[JENKINS-MAVEN] Lucene-Solr-Maven-5.3 #26: POMs out of sync

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.3/26/

No tests ran.

Build Log:
[...truncated 25175 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/build.xml:742: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/build.xml:231: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/build.xml:415: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/common-build.xml:2245:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/analysis/build.xml:122:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/common-build.xml:1673:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.3/lucene/common-build.xml:589:
 Error deploying artifact 'org.apache.lucene:lucene-analyzers-icu:jar': Error 
deploying artifact: Failed to transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-analyzers-icu/5.3.1-SNAPSHOT/lucene-analyzers-icu-5.3.1-20150923.124322-13-javadoc.jar.md5.
 Return code is: 502

Total time: 9 minutes 58 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure
Sending email for trigger: Failure




[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 74 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/74/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.security.PKIAuthenticationIntegrationTest.testPkiAuth

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([FA34EC1D9257FE9:3F1DD7903993F848]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:836)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.security.PKIAuthenticationIntegrationTest.testPkiAuth(PKIAuthenticationIntegrationTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (LUCENE-6813) OfflineSorter.sort isn't thread-safe

2015-09-23 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904338#comment-14904338
 ] 

Dawid Weiss commented on LUCENE-6813:
-

I don't fully understand the problem, but to me OfflineSorter is thread-safe -- 
it takes input and output paths, then potentially creates some intermediate 
files, which should never cause any threading problems because they're created 
atomically by the file system. OfflineSorter also makes a best effort to delete 
these files. If your output already exists, it should be overwritten... where 
is the thread-safety problem?

As for Windows and the pending-delete queue, can we pinpoint when this is 
happening (is it a leaked file handle, a lock)? Perhaps there is a better fix 
that just caters for the delay of file/folder deletion on Windows (assuming 
this is a documented feature)? If Files.delete returns and the file is not 
deleted, that seems like a bug in the JDK to me:

http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#delete(java.nio.file.Path)
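
The two properties leaned on above are small enough to pin down in a few lines 
(a sketch for illustration, not code from OfflineSorter itself):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

final class TempFileContract {
  static void demo(Path dir) throws IOException {
    // Files.createTempFile picks a unique name atomically, so two threads can
    // never collide on an intermediate file.
    Path tmp = Files.createTempFile(dir, "sort", ".tmp");
    // Files.delete either removes the file or throws; a normal return with the
    // file still present would be the JDK/filesystem quirk in question (e.g.
    // Windows' pending-delete behavior while a handle is still open).
    Files.delete(tmp);
  }
}
{code}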



> OfflineSorter.sort isn't thread-safe
> 
>
> Key: LUCENE-6813
> URL: https://issues.apache.org/jira/browse/LUCENE-6813
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6813.patch
>
>
> The new BKD tree classes, and NumericRangeTree (just a 1D BKD tree),
> make heavy use of OfflineSorter to build their data structures at
> indexing time when the number of indexed documents is biggish.
> But when I was first building them (LUCENE-6477), I hit a thread
> safety issue in OfflineSorter, and at that time I just worked around
> it by creating my own private temp directory each time I need to write
> a BKD tree.
> This workaround is sort of messy, and it causes problems with "pending
> delete" files on Windows when we try to remove that temp directory,
> causing test failures like 
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5149/
> I think instead we should fix the root cause ... i.e. make
> OfflineSorter thread safe.  It looks like it's simple...
> Separately I'd like to somehow fix these BKD tests to catch any leaked
> file handles ... I'm not sure they are today.






[jira] [Commented] (SOLR-8069) Ensure that only the valid ZooKeeper registered leader can put a replica into Leader Initiated Recovery.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904408#comment-14904408
 ] 

Mark Miller commented on SOLR-8069:
---

I was out on the phone last night - a fuller reply:

bq. What happens if the leaderZkNodeParentVersion doesn't match? 

The leader cannot update the zk node, which is what we want.

bq. Presumably that's a possibility or else why add the check.

It's the whole point of the patch?

bq. I'm certainly not well versed in this area of the code but checking 
isLeader seems a little roundabout

There is no reason to go to zk if we already know we are not the leader locally 
- what is roundabout about it?

bq. What does that mean?

That the fix worked??
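
For context on the leaderZkNodeParentVersion check above: it is ZooKeeper's 
standard compare-and-set, a write conditioned on an expected node version that 
fails cleanly when the node has changed underneath you. A generic sketch of 
that primitive (illustrative names, not the SOLR-8069 patch itself):

{code}
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;

final class VersionedWrite {
  // Returns false instead of writing when someone else (e.g. a new leader
  // election) has modified the node since expectedVersion was read.
  static boolean setDataIfUnchanged(ZooKeeper zk, String path, byte[] data,
                                    int expectedVersion)
      throws KeeperException, InterruptedException {
    try {
      zk.setData(path, data, expectedVersion);
      return true;
    } catch (KeeperException.BadVersionException e) {
      return false; // stale view; re-check leadership before retrying
    }
  }
}
{code}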





> Ensure that only the valid ZooKeeper registered leader can put a replica into 
> Leader Initiated Recovery.
> 
>
> Key: SOLR-8069
> URL: https://issues.apache.org/jira/browse/SOLR-8069
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Attachments: SOLR-8069.patch, SOLR-8069.patch
>
>
> I've seen this twice now. Need to work on a test.
> When some issues hit all the replicas at once, you can end up in a situation 
> where the rightful leader was put or put itself into LIR. Even on restart, 
> this rightful leader won't take leadership and you have to manually clear the 
> LIR nodes.
> It seems that if all the replicas participate in election on startup, LIR 
> should just be cleared.






[jira] [Commented] (SOLR-8073) Solr fails to start on Windows with obscure errors when using relative path

2015-09-23 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904336#comment-14904336
 ] 

Ishan Chattopadhyaya commented on SOLR-8073:


I have been able to reproduce it and am working on it right now. It looks like 
a start script issue rather than an authentication issue (though 
authentication's error reporting should be improved here).
However, since a workaround exists, I am wondering if this is really a blocker?

> Solr fails to start on Windows with obscure errors when using relative path
> ---
>
> Key: SOLR-8073
> URL: https://issues.apache.org/jira/browse/SOLR-8073
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
> Environment: Windows 7
>Reporter: Alexandre Rafalovitch
>Priority: Critical
>
> Clean 5.3  (and 5.3.1 RC3) on Windows:
> * bin\solr start -e techproducts
> * Visit Admin UI - all works
> * bin\solr stop -all
> * bin\solr start -s example\techproducts\solr
> * ERROR: Solr at http://localhost:8983/solr did not come online within 30 
> seconds!
> * Visit Admin UI - get an error:
> {quote}
> HTTP ERROR 500
> Problem accessing /solr/. Reason:
> Server Error
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> Possibly related to SOLR-8068? 






Re: Please vote for the 3rd release candidate for Lucene/Solr 5.3.1

2015-09-23 Thread Noble Paul
Vote passed. I shall start the release process.

On Thu, Sep 17, 2015 at 10:29 PM, Anshum Gupta  wrote:
> +1 for both Java7 and Java8 !
>
> SUCCESS! [1:12:33.649607]
>
> On Wed, Sep 16, 2015 at 4:22 PM, Noble Paul  wrote:
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.1-RC3-rev1703449/
>>
>> You can run the smoke tester directly with this command:
>> python3 -u dev-tools/scripts/smokeTestRelease.py https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.1-RC3-rev1703449/
>>
>>
>> +1
>> SUCCESS! [0:50:32.819792]
>>
>>
>> --
>> -
>> Noble Paul
>>
>>
>
>
>
> --
> Anshum Gupta



-- 
-
Noble Paul




[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14280 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14280/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 
seconds
at 
__randomizedtesting.SeedInfo.seed([171B72F95A3066FB:B05FCA5D378B7542]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:172)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:133)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:128)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:465)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.clearSourceCollection(BaseCdcrDistributedZkTest.java:319)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTestPartialReplicationWithTruncatedTlog(CdcrReplicationHandlerTest.java:121)
at 
org.apache.solr.cloud.CdcrReplicationHandlerTest.doTest(CdcrReplicationHandlerTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-8069) Ensure that only the valid ZooKeeper registered leader can put a replica into Leader Initiated Recovery.

2015-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904442#comment-14904442
 ] 

ASF subversion and git services commented on SOLR-8069:
---

Commit 1704837 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1704837 ]

SOLR-8069: Ensure that only the valid ZooKeeper registered leader can put a 
replica into Leader Initiated Recovery.

> Ensure that only the valid ZooKeeper registered leader can put a replica into 
> Leader Initiated Recovery.
> 
>
> Key: SOLR-8069
> URL: https://issues.apache.org/jira/browse/SOLR-8069
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Attachments: SOLR-8069.patch, SOLR-8069.patch
>
>
> I've seen this twice now. Need to work on a test.
> When some issues hit all the replicas at once, you can end up in a situation 
> where the rightful leader was put or put itself into LIR. Even on restart, 
> this rightful leader won't take leadership and you have to manually clear the 
> LIR nodes.
> It seems that if all the replicas participate in election on startup, LIR 
> should just be cleared.






[jira] [Commented] (SOLR-8077) Replication can still cause index corruption.

2015-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904461#comment-14904461
 ] 

ASF subversion and git services commented on SOLR-8077:
---

Commit 1704840 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1704840 ]

SOLR-8077: Replication can still cause index corruption.

> Replication can still cause index corruption.
> -
>
> Key: SOLR-8077
> URL: https://issues.apache.org/jira/browse/SOLR-8077
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>
> Bah. Somehow a critical part of SOLR-7134 did not get in with the commit.
> {code}
>if (slowFileExists(indexDir, fname)) {
> -LOG.info("Skipping move file - it already exists:" + fname);
> -return true;
> +LOG.warn("Cannot complete replication attempt because file already 
> exists:" + fname);
> +
> +// we fail - we downloaded the files we need, if we can't move one 
> in, we can't
> +// count on the correct index
> +return false;
>}
> {code}






[jira] [Updated] (LUCENE-6813) OfflineSorter.sort isn't thread-safe

2015-09-23 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-6813:
---
Attachment: LUCENE-6813.patch

Patch, but I still need to add a thread test to see if it provokes the original 
issue I hit.

I think the only reason OfflineSorter.sort wasn't thread-safe is that it 
removed the output file up front instead of replacing it later with the 
atomic move ... I just removed that Files.deleteIfExists and added 
StandardCopyOption.REPLACE_EXISTING when we do the final Files.move.
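
Schematically, the change described above looks like this (a sketch of the 
idea, not the attached patch verbatim):

{code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class PublishSorted {
  static void publish(Path sorted, Path output) throws IOException {
    // Before: Files.deleteIfExists(output) ran up front, leaving a window in
    // which a second thread sees no output file and races the first.
    // After: leave any existing output in place and let the final atomic move
    // replace it in one step.
    Files.move(sorted, output,
        StandardCopyOption.ATOMIC_MOVE,
        StandardCopyOption.REPLACE_EXISTING);
  }
}
{code}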

> OfflineSorter.sort isn't thread-safe
> 
>
> Key: LUCENE-6813
> URL: https://issues.apache.org/jira/browse/LUCENE-6813
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6813.patch
>
>
> The new BKD tree classes, and NumericRangeTree (just a 1D BKD tree),
> make heavy use of OfflineSorter to build their data structures at
> indexing time when the number of indexed documents is biggish.
> But when I was first building them (LUCENE-6477), I hit a thread
> safety issue in OfflineSorter, and at that time I just worked around
> it by creating my own private temp directory each time I need to write
> a BKD tree.
> This workaround is sort of messy, and it causes problems with "pending
> delete" files on Windows when we try to remove that temp directory,
> causing test failures like 
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5149/
> I think instead we should fix the root cause ... i.e. make
> OfflineSorter thread safe.  It looks like it's simple...
> Separately I'd like to somehow fix these BKD tests to catch any leaked
> file handles ... I'm not sure they are today.






[jira] [Commented] (LUCENE-6813) OfflineSorter.sort isn't thread-safe

2015-09-23 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904341#comment-14904341
 ] 

Dawid Weiss commented on LUCENE-6813:
-

Also, this looks suspicious to me in OfflineSorter:
{code}
// If simple rename doesn't work this means the output is
// on a different volume or something. Copy the input then.
try {
  Files.move(single, output, StandardCopyOption.ATOMIC_MOVE);
} catch (IOException | UnsupportedOperationException e) {
  Files.copy(single, output);
}
{code}
because Files.move should itself move files across volumes (so if it throws an 
exception, the explicit copy fallback duplicates effort):
http://docs.oracle.com/javase/7/docs/api/java/nio/file/Files.html#move(java.nio.file.Path,%20java.nio.file.Path,%20java.nio.file.CopyOption...)

This may be a left-over piece of code from when File.renameTo was used (which 
indeed doesn't work across volumes).
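
If a fallback is kept at all, a leaner shape would catch only the "not atomic 
here" case and fall back to a plain move, which handles cross-volume targets 
by itself (a sketch, not a patch from this thread):

{code}
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class MoveWithFallback {
  static void move(Path single, Path output) throws IOException {
    try {
      Files.move(single, output, StandardCopyOption.ATOMIC_MOVE);
    } catch (AtomicMoveNotSupportedException e) {
      // A plain move copies across volumes and, unlike a bare Files.copy,
      // also removes the source file afterwards.
      Files.move(single, output, StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}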

> OfflineSorter.sort isn't thread-safe
> 
>
> Key: LUCENE-6813
> URL: https://issues.apache.org/jira/browse/LUCENE-6813
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6813.patch
>
>
> The new BKD tree classes, and NumericRangeTree (just a 1D BKD tree),
> make heavy use of OfflineSorter to build their data structures at
> indexing time when the number of indexed documents is biggish.
> But when I was first building them (LUCENE-6477), I hit a thread
> safety issue in OfflineSorter, and at that time I just worked around
> it by creating my own private temp directory each time I need to write
> a BKD tree.
> This workaround is sort of messy, and it causes problems with "pending
> delete" files on Windows when we try to remove that temp directory,
> causing test failures like 
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5149/
> I think instead we should fix the root cause ... i.e. make
> OfflineSorter thread safe.  It looks like it's simple...
> Separately I'd like to somehow fix these BKD tests to catch any leaked
> file handles ... I'm not sure they are today.






[jira] [Commented] (SOLR-8069) Ensure that only the valid ZooKeeper registered leader can put a replica into Leader Initiated Recovery.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904435#comment-14904435
 ] 

Mark Miller commented on SOLR-8069:
---

So this adds sensible local isLeader checks where we were already checking ZK, 
it passes the core descriptor instead of just a name to LIR so it has a lot 
more context to work with, and it ensures that only the registered ZK leader 
can put a replica into LIR.

Barring any bugs in the current code, let's open further issues for other 
changes / improvements.
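
The local short-circuit reads roughly like this (illustrative names only; see 
the attached patch for the real code):

{code}
final class LirPreCheck {
  interface ZkLeaderCheck { boolean isRegisteredLeader(); }

  // Only consult ZooKeeper when local state still claims leadership; a replica
  // that already knows it is not the leader never attempts to set LIR.
  static boolean mayPutReplicaIntoLir(boolean isLeaderLocally, ZkLeaderCheck zk) {
    if (!isLeaderLocally) {
      return false;
    }
    return zk.isRegisteredLeader();
  }
}
{code}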

> Ensure that only the valid ZooKeeper registered leader can put a replica into 
> Leader Initiated Recovery.
> 
>
> Key: SOLR-8069
> URL: https://issues.apache.org/jira/browse/SOLR-8069
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Attachments: SOLR-8069.patch, SOLR-8069.patch
>
>
> I've seen this twice now. Need to work on a test.
> When some issues hit all the replicas at once, you can end up in a situation 
> where the rightful leader was put or put itself into LIR. Even on restart, 
> this rightful leader won't take leadership and you have to manually clear the 
> LIR nodes.
> It seems that if all the replicas participate in election on startup, LIR 
> should just be cleared.






[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 13995 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13995/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 62267 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:665: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:652: Source checkout is 
dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 65 minutes 33 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-8069) Ensure that only the valid ZooKeeper registered leader can put a replica into Leader Initiated Recovery.

2015-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904431#comment-14904431
 ] 

ASF subversion and git services commented on SOLR-8069:
---

Commit 1704836 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1704836 ]

SOLR-8069: Ensure that only the valid ZooKeeper registered leader can put a 
replica into Leader Initiated Recovery.

> Ensure that only the valid ZooKeeper registered leader can put a replica into 
> Leader Initiated Recovery.
> 
>
> Key: SOLR-8069
> URL: https://issues.apache.org/jira/browse/SOLR-8069
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Attachments: SOLR-8069.patch, SOLR-8069.patch
>
>
> I've seen this twice now. Need to work on a test.
> When some issues hit all the replicas at once, you can end up in a situation 
> where the rightful leader was put or put itself into LIR. Even on restart, 
> this rightful leader won't take leadership and you have to manually clear the 
> LIR nodes.
> It seems that if all the replicas participate in election on startup, LIR 
> should just be cleared.






[jira] [Commented] (SOLR-8077) Replication can still cause index corruption.

2015-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904469#comment-14904469
 ] 

ASF subversion and git services commented on SOLR-8077:
---

Commit 1704841 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1704841 ]

SOLR-8077: Replication can still cause index corruption.

> Replication can still cause index corruption.
> -
>
> Key: SOLR-8077
> URL: https://issues.apache.org/jira/browse/SOLR-8077
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>
> Bah. Somehow a critical part of SOLR-7134 did not get in with the commit.
> {code}
>if (slowFileExists(indexDir, fname)) {
> -LOG.info("Skipping move file - it already exists:" + fname);
> -return true;
> +LOG.warn("Cannot complete replication attempt because file already 
> exists:" + fname);
> +
> +// we fail - we downloaded the files we need, if we can't move one 
> in, we can't
> +// count on the correct index
> +return false;
>}
> {code}






[jira] [Updated] (SOLR-8073) Solr fails to start on Windows with obscure errors when using relative path

2015-09-23 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-8073:
---
Attachment: SOLR-8073.patch

Here's a patch that fixes the issue. The fix is to convert the relative path 
to an absolute path first (if it exists).

In the unix script, the checks are similar: if the path is absolute, use it; 
if it is relative (doesn't start with /), make it absolute:
{noformat}
  if [[ $SOLR_HOME != /* ]] && [[ -d "$SOLR_SERVER_DIR/$SOLR_HOME" ]]; then
SOLR_HOME="$SOLR_SERVER_DIR/$SOLR_HOME"
SOLR_PID_DIR="$SOLR_HOME"
  elif [[ $SOLR_HOME != /* ]] && [[ -d "`pwd`/$SOLR_HOME" ]]; then
SOLR_HOME="$(pwd)/$SOLR_HOME"
  fi
{noformat}
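
For clarity, the same resolution order expressed in Java NIO (a hypothetical 
helper for illustration; the attached patch changes the start scripts, not 
Java code):

{code}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

final class SolrHomeResolver {
  // Mirrors the script above: absolute paths pass through; relative paths are
  // anchored under the server dir first, then under the current directory.
  static Path resolve(String solrHome, Path serverDir) {
    Path p = Paths.get(solrHome);
    if (p.isAbsolute()) {
      return p;
    }
    Path underServer = serverDir.resolve(p);
    if (Files.isDirectory(underServer)) {
      return underServer;
    }
    Path underCwd = Paths.get("").toAbsolutePath().resolve(p);
    if (Files.isDirectory(underCwd)) {
      return underCwd;
    }
    return p; // leave unresolved so the caller can report a clearer error
  }
}
{code}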

For the null pointer exception (which is ugly, since it masks the real problem 
that the core container is not loaded), I suggest we also commit my patch for 
SOLR-8068, which checks cores for null right at the beginning of SDF's 
doFilter() call.

> Solr fails to start on Windows with obscure errors when using relative path
> ---
>
> Key: SOLR-8073
> URL: https://issues.apache.org/jira/browse/SOLR-8073
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
> Environment: Windows 7
>Reporter: Alexandre Rafalovitch
>Priority: Critical
> Attachments: SOLR-8073.patch
>
>
> Clean 5.3  (and 5.3.1 RC3) on Windows:
> * bin\solr start -e techproducts
> * Visit Admin UI - all works
> * bin\solr stop -all
> * bin\solr start -s example\techproducts\solr
> * ERROR: Solr at http://localhost:8983/solr did not come online within 30 
> seconds!
> * Visit Admin UI - get an error:
> {quote}
> HTTP ERROR 500
> Problem accessing /solr/. Reason:
> Server Error
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> Possibly related to SOLR-8068? 






[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904488#comment-14904488
 ] 

Ishan Chattopadhyaya commented on SOLR-8068:


[~anshumg] this situation can be hit any time the core container didn't 
initialize properly. The NPE on authentication in that situation is confusing, 
and hence I think either my patch or yours should go in. Such a situation was 
observed in SOLR-8073.
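
The guard both patches converge on amounts to a fail-fast check before any 
authentication work (a sketch with hypothetical naming; see the attached 
patches for the actual change):

{code}
import java.io.IOException;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

final class CoreContainerGuard {
  // Returns true when the request was rejected because the CoreContainer never
  // initialized; the filter should then return immediately instead of running
  // into the NPE in authenticateRequest().
  static boolean rejectIfNotInitialized(Object cores, ServletResponse response)
      throws IOException {
    if (cores != null) {
      return false;
    }
    ((HttpServletResponse) response).sendError(
        HttpServletResponse.SC_INTERNAL_SERVER_ERROR,
        "CoreContainer failed to initialize; check the Solr logs for the root cause");
    return true;
  }
}
{code}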

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when i 
> send a delete all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}






[JENKINS] Lucene-Solr-Tests-5.3-Java7 - Build # 57 - Failure

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.3-Java7/57/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:56673/_gf/qg, http://127.0.0.1:57285/_gf/qg, 
http://127.0.0.1:33359/_gf/qg, http://127.0.0.1:49922/_gf/qg, 
http://127.0.0.1:51847/_gf/qg]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:56673/_gf/qg, 
http://127.0.0.1:57285/_gf/qg, http://127.0.0.1:33359/_gf/qg, 
http://127.0.0.1:49922/_gf/qg, http://127.0.0.1:51847/_gf/qg]
at 
__randomizedtesting.SeedInfo.seed([6DE497A5A2A21D81:E5B0A87F0C5E7079]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1098)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:869)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:805)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:281)
at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:108)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Artifacts-5.3 - Build # 22 - Failure

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-5.3/22/

No tests ran.

Build Log:
[...truncated 12788 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.3/lucene/build.xml:353: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-5.3/lucene/common-build.xml:2606:
 java.net.ConnectException: Connection timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:618)
at 
sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:275)
at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:371)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at 
sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:932)
at 
sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at 
sun.net.www.protocol.https.HttpsURLConnectionImpl.connect(HttpsURLConnectionImpl.java:153)
at 
org.apache.tools.ant.taskdefs.Get$GetThread.openConnection(Get.java:660)
at org.apache.tools.ant.taskdefs.Get$GetThread.get(Get.java:579)
at org.apache.tools.ant.taskdefs.Get$GetThread.run(Get.java:569)

Total time: 6 minutes 48 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Artifacts-5.3 #21
Archived 6 artifacts
Archive block size is 32768
Received 1108 blocks and 112598662 bytes
Compression is 24.4%
Took 36 sec
Publishing Javadoc
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Commented] (SOLR-8081) When creating a collection, we need a way to utilize multiple disks available on a node.

2015-09-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904373#comment-14904373
 ] 

Jan Høydahl commented on SOLR-8081:
---

In the good old days when I worked for FAST, RAM was expensive and all search 
engines were disk-bound. It was ultra important to get maximum disk I/O 
throughput, and the solution was either a SAN with fibre channel or multiple 
local disks in RAID (typically HW RAID), striping with well-tuned block sizes 
and stripe sizes. That gave far better sequential read performance than 
single disks.

But these days we use RAM/CPU much more, indexes are smaller, and we do not 
need to invest that much in optimized disk systems except for special cases. 
I'm not a disk expert either, but I'd argue that the complexity of juggling 
cores between multiple disks, each of which can fill up at different times, as 
well as the unavoidable future requirement to migrate an existing core from 
one disk to another, may not be worth it as long as the problem can be solved 
further down the stack: https://en.wikipedia.org/wiki/RAID

> When creating a collection, we need a way to utilize multiple disks available 
> on a node.
> 
>
> Key: SOLR-8081
> URL: https://issues.apache.org/jira/browse/SOLR-8081
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Currently, if I want to change the dataDir for a core (such as to utilize 
> multiple disks on a Solr node), I need to either setup a symlink or change 
> the dataDir property in core.properties and restart the Solr node. For 
> instance, if I have a 4-node SolrCloud cluster and want to create a 
> collection with 4 shards with rf=2, then 8 Solr cores will be created across 
> the cluster, 2 per node. If I want to have each core use a separate disk, 
> then I have to do that after the fact. I'm aware that I could create the 
> collection with rf=1 and then use AddReplica to add additional replicas with 
> a different dataDir set, but that feels cumbersome as well.
> What would be nice is to have a way for me to specify available disks and 
> have Solr use that information when provisioning cores on the node. 
> [~anshumg] mentioned this might be best accomplished with a replica placement 
> strategy.






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_60) - Build # 5279 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5279/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 61683 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:775: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:655: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:642: Source 
checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 94 minutes 41 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Created] (LUCENE-6813) OfflineSorter.sort isn't thread-safe

2015-09-23 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6813:
--

 Summary: OfflineSorter.sort isn't thread-safe
 Key: LUCENE-6813
 URL: https://issues.apache.org/jira/browse/LUCENE-6813
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4


The new BKD tree classes, and NumericRangeTree (just a 1D BKD tree),
make heavy use of OfflineSorter to build their data structures at
indexing time when the number of indexed documents is biggish.

But when I was first building them (LUCENE-6477), I hit a thread
safety issue in OfflineSorter, and at that time I just worked around
it by creating my own private temp directory each time I need to write
a BKD tree.

This workaround is sort of messy, and it causes problems with "pending
delete" files on Windows when we try to remove that temp directory,
causing test failures like 
http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5149/

I think instead we should fix the root cause ... i.e. make
OfflineSorter thread safe.  It looks like it's simple...

Separately I'd like to somehow fix these BKD tests to catch any leaked
file handles ... I'm not sure they are today.







[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3544 - Failure

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3544/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior

Error Message:
Illegal state, was: down expected:active clusterState:live nodes:[]collections:{c1=DocCollection(c1)={"shards":{"shard1":{"state":"active", "range":null, "parent":null, "replicas":{"core_node1":{"base_url":"http://127.0.0.1/solr", "node_name":"node1", "core":"core1", "roles":"", "state":"down", "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}

Stack Trace:
java.lang.AssertionError: Illegal state, was: down expected:active 
clusterState:live nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
  "state":"active",
  "range":null,
  "parent":null,
  "replicas":{"core_node1":{
  "base_url":"http://127.0.0.1/solr;,
  "node_name":"node1",
  "core":"core1",
  "roles":"",
  "state":"down",
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}
at 
__randomizedtesting.SeedInfo.seed([C01D695AE82E3942:A8036AB60ABE630C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.OverseerTest.verifyStatus(OverseerTest.java:601)
at 
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior(OverseerTest.java:1261)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)

[jira] [Resolved] (SOLR-8077) Replication can still cause index corruption.

2015-09-23 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-8077.
---
   Resolution: Fixed
Fix Version/s: 5.4
   Trunk

> Replication can still cause index corruption.
> -
>
> Key: SOLR-8077
> URL: https://issues.apache.org/jira/browse/SOLR-8077
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
> Fix For: Trunk, 5.4
>
>
> Bah. Somehow a critical part of SOLR-7134 did not get in with the commit.
> {code}
>if (slowFileExists(indexDir, fname)) {
> -LOG.info("Skipping move file - it already exists:" + fname);
> -return true;
> +LOG.warn("Cannot complete replication attempt because file already 
> exists:" + fname);
> +
> +// we fail - we downloaded the files we need, if we can't move one 
> in, we can't
> +// count on the correct index
> +return false;
>}
> {code}
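
For readers skimming the diff, here is a minimal, self-contained sketch of the
contract the fix restores (hypothetical names, not the actual Solr source): an
existing target file must fail the replication attempt instead of counting as
success.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class ReplicationMoveSketch {
  // Hypothetical helper, not the Solr source: move one downloaded file
  // into the index directory, failing the whole attempt if the target
  // already exists instead of silently trusting the existing file.
  static boolean moveIntoIndex(Path downloaded, Path indexDir) throws IOException {
    Path target = indexDir.resolve(downloaded.getFileName());
    if (Files.exists(target)) {
      // Pre-fix behavior logged "Skipping move file" and returned true;
      // post-fix we return false, because we can't count on a correct index.
      return false;
    }
    Files.move(downloaded, target);
    return true;
  }
}
{code}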



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8073) Solr fails to start on Windows with obscure errors when using relative path

2015-09-23 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904483#comment-14904483
 ] 

Ishan Chattopadhyaya commented on SOLR-8073:


[~arafalov] Can you please test the patch?

> Solr fails to start on Windows with obscure errors when using relative path
> ---
>
> Key: SOLR-8073
> URL: https://issues.apache.org/jira/browse/SOLR-8073
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
> Environment: Windows 7
>Reporter: Alexandre Rafalovitch
>Priority: Critical
> Attachments: SOLR-8073.patch
>
>
> Clean 5.3  (and 5.3.1 RC3) on Windows:
> * bin\solr start -e techproducts
> * Visit Admin UI - all works
> * bin\solr stop -all
> * bin\solr start -s example\techproducts\solr
> * ERROR: Solr at http://localhost:8983/solr did not come online within 30 
> seconds!
> * Visit Admin UI - get an error:
> {quote}
> HTTP ERROR 500
> Problem accessing /solr/. Reason:
> Server Error
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> Possibly related to SOLR-8068? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8075) Leader Initiated Recovery should not stop a leader that participated in an election with all of its replicas from becoming a valid leader.

2015-09-23 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8075:
--
Attachment: SOLR-8075.patch

One more, updated to trunk and cleaned up a bit.

> Leader Initiated Recovery should not stop a leader that participated in an 
> election with all of its replicas from becoming a valid leader.
> ---
>
> Key: SOLR-8075
> URL: https://issues.apache.org/jira/browse/SOLR-8075
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch
>
>
> Currently, because of SOLR-8069, all the replicas in a shard can be put into 
> LIR.
> If you restart such a shard, the valid leader will win the election and 
> sync with the shard and then be blocked from registering as ACTIVE because it 
> is in LIR.
> I think that is a little wonky because I don't think it even tries another 
> candidate, because the leader that cannot publish ACTIVE does not have its 
> election canceled.
> While SOLR-8069 should prevent this situation, we should add logic to allow a 
> leader that can sync with its full shard to become leader and publish ACTIVE 
> regardless of LIR.
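
A hedged sketch of the rule being proposed, with hypothetical names (the real
patch is attached above): LIR normally blocks an ACTIVE publish, unless the
candidate proved itself by syncing with every replica in the shard.

{code:java}
class LeaderElectionSketch {
  // Hypothetical predicate, not the actual patch: a candidate in LIR may
  // still become leader and publish ACTIVE if it successfully synced with
  // all replicas of its shard, since that proves it is as up to date as
  // any other candidate.
  static boolean mayPublishActive(boolean inLir, boolean syncedWithFullShard) {
    if (!inLir) {
      return true;                 // normal case: nothing blocks the publish
    }
    return syncedWithFullShard;    // a full-shard sync overrides LIR
  }
}
{code}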



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b78) - Build # 14281 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14281/
Java: 32bit/jdk1.9.0-ea-b78 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.SaslZkACLProviderTest: 1) Thread[id=5920, 
name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)2) Thread[id=5917, 
name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest] at 
java.lang.Object.wait(Native Method) at 
java.lang.Object.wait(Object.java:516) at 
java.util.TimerThread.mainLoop(Timer.java:526) at 
java.util.TimerThread.run(Timer.java:505)3) Thread[id=5918, 
name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
 at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)4) Thread[id=5921, 
name=kdcReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)5) Thread[id=5919, 
name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:746)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=5920, name=ou=system.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2751 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2751/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior

Error Message:
Illegal state, was: down expected:active clusterState:live 
nodes:[]collections:{c1=DocCollection(c1)={   "shards":{"shard1":{   
"parent":null,   "range":null,   "state":"active",   
"replicas":{"core_node1":{   "base_url":"http://127.0.0.1/solr;,
   "node_name":"node1",   "core":"core1",   "roles":"", 
  "state":"down",   "router":{"name":"implicit"}}, 
test=LazyCollectionRef(test)}

Stack Trace:
java.lang.AssertionError: Illegal state, was: down expected:active 
clusterState:live nodes:[]collections:{c1=DocCollection(c1)={
  "shards":{"shard1":{
  "parent":null,
  "range":null,
  "state":"active",
  "replicas":{"core_node1":{
  "base_url":"http://127.0.0.1/solr;,
  "node_name":"node1",
  "core":"core1",
  "roles":"",
  "state":"down",
  "router":{"name":"implicit"}}, test=LazyCollectionRef(test)}
at 
__randomizedtesting.SeedInfo.seed([D764EB7D33C537B:65684D5B31AC0935]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.OverseerTest.verifyStatus(OverseerTest.java:601)
at 
org.apache.solr.cloud.OverseerTest.testExternalClusterStateChangeBehavior(OverseerTest.java:1261)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_80) - Build # 5150 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5150/
Java: 32bit/jdk1.7.0_80 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking

Error Message:
Shard a1x2_shard1_replica1 received all 10 requests

Stack Trace:
java.lang.AssertionError: Shard a1x2_shard1_replica1 received all 10 requests
at 
__randomizedtesting.SeedInfo.seed([5BF095330FFDB8A2:13F3FBF6A934]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestRandomRequestDistribution.testRequestTracking(TestRandomRequestDistribution.java:109)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Updated] (SOLR-8075) Leader Initiated Recovery should not stop a leader that participated in an election with all of its replicas from becoming a valid leader.

2015-09-23 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8075:
--
Attachment: SOLR-8075.patch

New patch - last one missed my new test.

> Leader Initiated Recovery should not stop a leader that participated in an 
> election with all of its replicas from becoming a valid leader.
> ---
>
> Key: SOLR-8075
> URL: https://issues.apache.org/jira/browse/SOLR-8075
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8075.patch, SOLR-8075.patch
>
>
> Currently, because of SOLR-8069, all the replicas in a shard can be put into 
> LIR.
> If you restart such a shard, the valid leader will win the election and 
> sync with the shard and then be blocked from registering as ACTIVE because it 
> is in LIR.
> I think that is a little wonky because I don't think it even tries another 
> candidate, because the leader that cannot publish ACTIVE does not have its 
> election canceled.
> While SOLR-8069 should prevent this situation, we should add logic to allow a 
> leader that can sync with its full shard to become leader and publish ACTIVE 
> regardless of LIR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-09-23 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904650#comment-14904650
 ] 

Terry Smith commented on LUCENE-6699:
-

Karl, were you able to find that packing scheme? I'm interested in poking the 
x,y,z values into a SortedNumericDocValuesField to see how well it would 
perform.
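
In the meantime, one plausible packing scheme, purely an assumption and not
necessarily what Geo3DPacking.java does: quantize each of x, y, z to 21 bits
and pack them into a single 63-bit long, which fits a
SortedNumericDocValuesField at the cost of precision (the quantization step is
2*bound / (2^21 - 1) per dimension).

{code:java}
final class XYZPacking {
  private static final int BITS = 21;
  private static final long MASK = (1L << BITS) - 1;

  // Quantize v in [-bound, bound] onto [0, 2^21 - 1].
  private static long quantize(double v, double bound) {
    return Math.round(((v + bound) / (2 * bound)) * MASK);
  }

  // Pack quantized x, y, z into one 63-bit long (the sign bit stays 0).
  static long encode(double x, double y, double z, double bound) {
    return (quantize(x, bound) << (2 * BITS))
         | (quantize(y, bound) << BITS)
         | quantize(z, bound);
  }

  // dim: 0 = x, 1 = y, 2 = z.
  static double decode(long packed, int dim, double bound) {
    long q = (packed >>> ((2 - dim) * BITS)) & MASK;
    return (q / (double) MASK) * (2 * bound) - bound;
  }
}
{code}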


> Integrate lat/lon BKD and spatial3d
> ---
>
> Key: LUCENE-6699
> URL: https://issues.apache.org/jira/browse/LUCENE-6699
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch
>
>
> I'm opening this for discussion, because I'm not yet sure how to do
> this integration, because of my ignorance about spatial in general and
> spatial3d in particular :)
> Our BKD tree impl is very fast at doing lat/lon shape intersection
> (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
> points.
> I think to integrate with spatial3d, we would first need to record
> lat/lon/z into doc values.  Somewhere I saw discussion about how we
> could stuff all 3 into a single long value with acceptable precision
> loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
> to do the fast per-hit query time filtering.
> But, second: what do we index into the BKD tree?  Can we "just" index
> earth surface lat/lon, and then at query time is spatial3d able to
> give me an enclosing "surface lat/lon" bbox for a 3d shape?  Or
> ... must we index all 3 dimensions into the BKD tree (seems like this
> could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904520#comment-14904520
 ] 

Noble Paul commented on SOLR-8068:
--

[~anshumg] I think that if the CoreContainer is not initialized properly, we 
should throw a sensible error right at the beginning. Otherwise it will 
manifest as an NPE or some other error at a different line, which may not 
be consistent at all.
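
As a sketch of that fail-fast idea (hypothetical wiring, not the committed
patch; the 'cores' field stands in for the filter's CoreContainer reference):

{code:java}
import java.io.IOException;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class FailFastFilterSketch {
  private volatile Object cores;  // placeholder for the CoreContainer field

  public void doFilter(ServletRequest req, ServletResponse rsp, FilterChain chain)
      throws IOException, ServletException {
    if (cores == null) {
      // Throw one sensible, consistent error up front instead of letting a
      // later null dereference surface as an NPE deep in authenticateRequest.
      ((HttpServletResponse) rsp).sendError(503,
          "Server is shutting down or failed to initialize");
      return;
    }
    chain.doFilter(req, rsp);
  }
}
{code}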

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when i 
> send a delete all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8087) Look into defensive check in publish that will not let a replica in LIR publish ACTIVE.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904553#comment-14904553
 ] 

Mark Miller commented on SOLR-8087:
---

It may be that this is just a fail-safe and that it was doing its job due to 
SOLR-8069?

My test that tickles this in SOLR-8075 was pre-SOLR-8069 and simulates an 
issue you should not run into with it (hopefully). But I was a little concerned 
that we ended up hitting this fail-safe rather than something that would allow 
another replica to attempt leadership. 
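
Reduced to a sketch, the defensive check being discussed looks something like
this (hypothetical signature, not the Solr source):

{code:java}
class PublishSketch {
  // Hypothetical form of the defensive check: refuse to publish ACTIVE
  // for a replica that is still marked down by leader-initiated recovery.
  void publishActive(String coreNodeName, boolean inLir) {
    if (inLir) {
      // This protects shard consistency, but note the concern above: if
      // the blocked replica is the elected leader, its election is never
      // canceled and the shard can wedge.
      throw new IllegalStateException(
          "Cannot publish ACTIVE for " + coreNodeName + " while in LIR");
    }
    // ... otherwise write the ACTIVE state to ZooKeeper ...
  }
}
{code}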

> Look into defensive check in publish that will not let a replica in LIR 
> publish ACTIVE.
> ---
>
> Key: SOLR-8087
> URL: https://issues.apache.org/jira/browse/SOLR-8087
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> What I am worried about here is that if you hit this situation, how is the 
> election canceled? It seems like perhaps the leader can't publish ACTIVE and 
> then the shard is locked even if another replica could be leader?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8073) Solr fails to start on Windows with obscure errors when using relative path

2015-09-23 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8073?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904509#comment-14904509
 ] 

Alexandre Rafalovitch commented on SOLR-8073:
-

It works for me. Thank you.

> Solr fails to start on Windows with obscure errors when using relative path
> ---
>
> Key: SOLR-8073
> URL: https://issues.apache.org/jira/browse/SOLR-8073
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
> Environment: Windows 7
>Reporter: Alexandre Rafalovitch
>Priority: Critical
> Attachments: SOLR-8073.patch
>
>
> Clean 5.3  (and 5.3.1 RC3) on Windows:
> * bin\solr start -e techproducts
> * Visit Admin UI - all works
> * bin\solr stop -all
> * bin\solr start -s example\techproducts\solr
> * ERROR: Solr at http://localhost:8983/solr did not come online within 30 
> seconds!
> * Visit Admin UI - get an error:
> {quote}
> HTTP ERROR 500
> Problem accessing /solr/. Reason:
> Server Error
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {quote}
> Possibly related to SOLR-8068? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8085) ChaosMonkey Safe Leader Test fail with shard inconsistency.

2015-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904585#comment-14904585
 ] 

Yonik Seeley commented on SOLR-8085:


OK, here's an analysis of fail.150922_130608.

It looks like LIR happens during normal recovery after startup, and we finally 
end up doing recovery with recoverAfterStartup=false, which uses recent 
versions in peersync (and includes docs that have been buffered while we were 
recovering) rather than the true startup versions. This causes peersync to 
pass when it should not have.

{code}

(add first appears, node 47239 appears to be coming up at the time)
  2> 74317 INFO  (qtp324612161-378) [n:127.0.0.1:40940_ c:collection1 s:shard3 
r:core_node6 x:collection1] o.a.s.u.p.LogUpdateProcessor [collection1] webapp= 
path=/update 
params={update.distrib=FROMLEADER&distrib.from=http://127.0.0.1:38911/collection1/&wt=javabin&version=2}
 {add=[0-333 (1513033816377131008)]} 0 19
  2> 74317 INFO  (qtp324612161-378) [n:127.0.0.1:40940_ c:collection1 s:shard3 
r:core_node6 x:collection1] o.a.s.u.p.LogUpdateProcessor [collection1] webapp= 
path=/update 
params={update.distrib=FROMLEADER&distrib.from=http://127.0.0.1:38911/collection1/&wt=javabin&version=2}
 {add=[0-333 (1513033816377131008)]} 0 19
  2> 74317 INFO  (qtp661063741-234) [n:127.0.0.1:38911_ c:collection1 s:shard3 
r:core_node2 x:collection1] o.a.s.u.p.LogUpdateProcessor [collection1] webapp= 
path=/update params={wt=javabin&version=2} {add=[0-333 (1513033816377131008)]} 
0 31
  
(node coming up)
  2> 75064 INFO  (coreLoadExecutor-196-thread-1-processing-n:127.0.0.1:47239_) 
[n:127.0.0.1:47239_ c:collection1 s:shard3 r:core_node10 x:collection1] 
o.a.s.u.VersionInfo Refreshing highest value of _version_ for 256 version 
buckets from index
  2> 75065 INFO  (coreLoadExecutor-196-thread-1-processing-n:127.0.0.1:47239_) 
[n:127.0.0.1:47239_ c:collection1 s:shard3 r:core_node10 x:collection1] 
o.a.s.u.UpdateLog Took 31.0ms to seed version buckets with highest version 
1513033812159758336
  2> 75120 INFO  (coreZkRegister-190-thread-1-processing-n:127.0.0.1:47239_ 
x:collection1 s:shard3 c:collection1 r:core_node10) [n:127.0.0.1:47239_ 
c:collection1 s:shard3 r:core_node10 x:collection1] o.a.s.c.ZkController 
Replaying tlog for http://127.0.0.1:47239/collection1/ during startup... NOTE: 
This can take a while.
  2> 75162 DEBUG (recoveryExecutor-199-thread-1-processing-n:127.0.0.1:47239_ 
x:collection1 s:shard3 c:collection1 r:core_node10) [n:127.0.0.1:47239_ 
c:collection1 s:shard3 r:core_node10 x:collection1] o.a.s.u.UpdateLog add 
add{flags=a,_version_=1513033807303802880,id=1-3}
  2> 75162 DEBUG (recoveryExecutor-199-thread-1-processing-n:127.0.0.1:47239_ 
x:collection1 s:shard3 c:collection1 r:core_node10) [n:127.0.0.1:47239_ 
c:collection1 s:shard3 r:core_node10 x:collection1] 
o.a.s.u.p.LogUpdateProcessor PRE_UPDATE 
add{flags=a,_version_=1513033807303802880,id=1-3} 
LocalSolrQueryRequest{update.distrib=FROMLEADER&log_replay=true}

(replay finished)
  2> 75280 DEBUG (recoveryExecutor-199-thread-1-processing-n:127.0.0.1:47239_ 
x:collection1 s:shard3 c:collection1 r:core_node10) [n:127.0.0.1:47239_ 
c:collection1 s:shard3 r:core_node10 x:collection1] 
o.a.s.u.p.LogUpdateProcessor PRE_UPDATE 
add{flags=a,_version_=1513033812159758336,id=1-132} 
LocalSolrQueryRequest{update.distrib=FROMLEADER&log_replay=true}
 
(meanwhile, the leader is asking us to recover?)
  2> 75458 WARN  (updateExecutor-14-thread-5-processing-x:collection1 
r:core_node2 http:127.0.0.1:47239//collection1// n:127.0.0.1:38911_ 
s:shard3 c:collection1) [n:127.0.0.1:38911_ c:collection1 s:shard3 r:core_node2 
x:collection1] o.a.s.c.LeaderInitiatedRecoveryThread Asking core=collection1 
coreNodeName=core_node10 on http://127.0.0.1:47239 to recover; unsuccessful 
after 2 of 120 attempts so far ...
  
(and we see the request to recover)
  2> 75475 INFO  (qtp2087242119-1282) [n:127.0.0.1:47239_] 
o.a.s.h.a.CoreAdminHandler It has been requested that we recover: 
core=collection1

(so we cancel the existing recovery)
  2> 75478 INFO  (Thread-1246) [n:127.0.0.1:47239_ c:collection1 s:shard3 
r:core_node10 x:collection1] o.a.s.u.DefaultSolrCoreState Running recovery - 
first canceling any ongoing recovery

  2> 75552 INFO  (RecoveryThread-collection1) [n:127.0.0.1:47239_ c:collection1 
s:shard3 r:core_node10 x:collection1] o.a.s.c.RecoveryStrategy Starting 
recovery process. recoveringAfterStartup=true

  2> 75610 INFO  (RecoveryThread-collection1) [n:127.0.0.1:47239_ c:collection1 
s:shard3 r:core_node10 x:collection1] o.a.s.c.RecoveryStrategy ## 
startupVersions=[1513033812159758336, [...]
  2> 75611 INFO  (RecoveryThread-collection1) [n:127.0.0.1:47239_ c:collection1 
s:shard3 r:core_node10 x:collection1] o.a.s.c.RecoveryStrategy Publishing state 
of core collection1 as recovering, leader is 
http://127.0.0.1:38911/collection1/ and I am http://127.0.0.1:47239/collection1/
  2> 75611 INFO  (RecoveryThread-collection1) 

[jira] [Updated] (SOLR-8085) ChaosMonkey Safe Leader Test fail with shard inconsistency.

2015-09-23 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8085:
---
Attachment: SOLR-8085.patch

OK, here's one possible patch I think.
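
The gist of the approach, as a hedged sketch with placeholder methods (the
attached patch is the real thing): remember a failed recovery attempt in state
that outlives the RecoveryStrategy instance, and refuse to trust peer sync on
the retry.

{code:java}
final class RecoverySketch {
  // Must survive the RecoveryStrategy object being thrown away and
  // recreated -- hence the discussion below about a static field vs.
  // hanging it off the core state.
  private static volatile boolean lastAttemptFailed = false;

  void recover() {
    boolean ok;
    if (!lastAttemptFailed && tryPeerSync()) {
      ok = true;                  // peer sync trusted only on a clean attempt
    } else {
      ok = fullReplication();     // otherwise copy the full index from leader
    }
    lastAttemptFailed = !ok;
  }

  private boolean tryPeerSync() { return false; }     // placeholder
  private boolean fullReplication() { return true; }  // placeholder
}
{code}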

> ChaosMonkey Safe Leader Test fail with shard inconsistency.
> ---
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8085.patch, fail.150922_125320, fail.150922_130608
>
>
> I've been discussing this fail I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has gotten enough docs to pass 
> peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8087) Look into defensive check in publish that will not let a replica in LIR publish ACTIVE.

2015-09-23 Thread Mark Miller (JIRA)
Mark Miller created SOLR-8087:
-

 Summary: Look into defensive check in publish that will not let a 
replica in LIR publish ACTIVE.
 Key: SOLR-8087
 URL: https://issues.apache.org/jira/browse/SOLR-8087
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller


What I am worried about here is that if you hit this situation, how is the 
election canceled? It seems like perhaps the leader can't publish ACTIVE and 
then the shard is locked even if another replica could be leader?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8085) ChaosMonkey Safe Leader Test fail with shard inconsistency.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904653#comment-14904653
 ] 

Mark Miller commented on SOLR-8085:
---

bq. but note that "recoveringAfterStartup" is now false)

Yeah, lots of ways to lose the class and start over - so if you really want a 
field to persist, it has to be static.

> ChaosMonkey Safe Leader Test fail with shard inconsistency.
> ---
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8085.patch, fail.150922_125320, fail.150922_130608
>
>
> I've been discussing this fail I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has gotten enough docs to pass 
> peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8085) ChaosMonkey Safe Leader Test fail with shard inconsistency.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904662#comment-14904662
 ] 

Mark Miller commented on SOLR-8085:
---

Looked at the patch - yeah, or put it on the core state :)

> ChaosMonkey Safe Leader Test fail with shard inconsistency.
> ---
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8085.patch, fail.150922_125320, fail.150922_130608
>
>
> I've been discussing this fail I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has gotten enough docs to pass 
> peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2699 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2699/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 62261 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:785: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:665: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:652: Source checkout 
is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 88 minutes 31 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects "fq" (filter query)

2015-09-23 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904852#comment-14904852
 ] 

David Boychuck commented on SOLR-6066:
--

Joel, 

Would these changes also fix the problems described in SOLR-6345?

> CollapsingQParserPlugin + Elevation does not respects "fq" (filter query) 
> --
>
> Key: SOLR-6066
> URL: https://issues.apache.org/jira/browse/SOLR-6066
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.8
>Reporter: Herb Jiang
>Assignee: Joel Bernstein
> Fix For: 4.9
>
> Attachments: SOLR-6066.patch, SOLR-6066.patch, SOLR-6066.patch, 
> TestCollapseQParserPlugin.java
>
>
> QueryElevationComponent respects the "fq" parameter. But when using 
> CollapsingQParserPlugin with QueryElevationComponent, an additional "fq" has no 
> effect.
> I use the following test case to show this issue. (It will fail.)
> {code:java}
> String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc));
> assertU(commit());
> String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
> "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc1));
> String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
> "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc2));
> assertU(commit());
> String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
> "1000", "test_tf", "2000"};
> assertU(adoc(doc3));
> String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
> assertU(adoc(doc4));
> assertU(commit());
> String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
> "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
> assertU(adoc(doc5));
> assertU(commit());
> //Test additional filter query when using collapse
> params = new ModifiableSolrParams();
> params.add("q", "");
> params.add("fq", "{!collapse field=group_s}");
> params.add("fq", "category_s:cat1");
> params.add("defType", "edismax");
> params.add("bf", "field(test_ti)");
> params.add("qf", "term_s");
> params.add("qt", "/elevate");
> params.add("elevateIds", "2");
> assertQ(req(params), "*[count(//doc)=1]",
> "//result/doc[1]/float[@name='id'][.='6.0']");
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6813) OfflineSorter.sort isn't thread-safe

2015-09-23 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904712#comment-14904712
 ] 

Michael McCandless commented on LUCENE-6813:


bq. I don't fully understand the problem but to me OfflineSorter is thread safe 

Sorry, I'm still trying to isolate exactly what the issue is ... I'll fix up the 
issue title once I have more of a clue.

I think the problem is (maybe) that {{OfflineSorter.sort}} currently removes 
its output path well before writing to it, and so if the caller is relying on 
{{Files.createTempFile}} to "pick" a unique filename across threads, which BKD 
is doing, then this can illegally re-use the same output Path across threads.

But I'm not certain this is the problem; I need to get the thread test online 
to see if I can repro/understand it outside of BKD's usage.

bq. Also, this looks suspicious to me in OfflineSorter:

If I remove that {{try/catch}} then {{Files.move}} is angry because it cannot 
be ATOMIC_MOVE across volumes ... can I just remove the ATOMIC_MOVE option (and 
the {{try/catch}})?  Why must this be atomic?
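
One conventional java.nio answer to the ATOMIC_MOVE question, offered as a
suggestion rather than a claim about what OfflineSorter should do, is to
attempt the atomic move and fall back when it is unsupported across volumes:

{code:java}
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

final class MoveSketch {
  // Try the atomic rename first; if source and target are on different
  // volumes the JDK throws AtomicMoveNotSupportedException, and we fall
  // back to a plain (non-atomic) move.
  static void moveResult(Path src, Path dst) throws IOException {
    try {
      Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
    } catch (AtomicMoveNotSupportedException e) {
      Files.move(src, dst, StandardCopyOption.REPLACE_EXISTING);
    }
  }
}
{code}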

> OfflineSorter.sort isn't thread-safe
> 
>
> Key: LUCENE-6813
> URL: https://issues.apache.org/jira/browse/LUCENE-6813
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6813.patch
>
>
> The new BKD tree classes, and NumericRangeTree (just a 1D BKD tree),
> make heavy use of OfflineSorter to build their data structures at
> indexing time when the number of indexed documents is biggish.
> But when I was first building them (LUCENE-6477), I hit a thread
> safety issue in OfflineSorter, and at that time I just worked around
> it by creating my own private temp directory each time I need to write
> a BKD tree.
> This workaround is sort of messy, and it causes problems with "pending
> delete" files on Windows when we try to remove that temp directory,
> causing test failures like 
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5149/
> I think instead we should fix the root cause ... i.e. make
> OfflineSorter thread safe.  It looks like it's simple...
> Separately I'd like to somehow fix these BKD tests to catch any leaked
> file handles ... I'm not sure they are today.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904735#comment-14904735
 ] 

Anshum Gupta commented on SOLR-8068:


I agree that if the core container didn't initialize properly, we should fail 
fast, and I actually thought that happened. Seems like I was wrong on that. I'll 
update the patch here to fail sooner than in the authentication wrapper.

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when i 
> send a delete all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-8081) When creating a collection, we need a way to utilize multiple disks available on a node.

2015-09-23 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-8081.
--
Resolution: Won't Fix

I'll go ahead and close this and go with the suggestion that users should just 
use RAID to present a single volume to Solr for now.

Thanks for the input, Jan and Noble.
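
For anyone landing here, the manual workaround mentioned in the description
looks roughly like this (hypothetical core name and paths):

{code}
# core.properties for one replica, edited by hand (hypothetical values);
# the Solr node must be restarted for the change to take effect.
name=collection1_shard2_replica1
dataDir=/mnt/disk2/solr/collection1_shard2_replica1/data
{code}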

> When creating a collection, we need a way to utilize multiple disks available 
> on a node.
> 
>
> Key: SOLR-8081
> URL: https://issues.apache.org/jira/browse/SOLR-8081
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Currently, if I want to change the dataDir for a core (such as to utilize 
> multiple disks on a Solr node), I need to either setup a symlink or change 
> the dataDir property in core.properties and restart the Solr node. For 
> instance, if I have a 4-node SolrCloud cluster and want to create a 
> collection with 4 shards with rf=2, then 8 Solr cores will be created across 
> the cluster, 2 per node. If I want to have each core use a separate disk, 
> then I have to do that after the fact. I'm aware that I could create the 
> collection with rf=1 and then use AddReplica to add additional replicas with 
> a different dataDir set, but that feels cumbersome as well.
> What would be nice is to have a way for me to specify available disks and 
> have Solr use that information when provisioning cores on the node. 
> [~anshumg] mentioned this might be best accomplished with a replica placement 
> strategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 73 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/73/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CdcrRequestHandlerTest.doTest

Error Message:
expected:<[dis]abled> but was:<[en]abled>

Stack Trace:
org.junit.ComparisonFailure: expected:<[dis]abled> but was:<[en]abled>
at 
__randomizedtesting.SeedInfo.seed([9E69AB6C4C6C1E4E:392D13C821D70DF7]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.assertState(BaseCdcrDistributedZkTest.java:289)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTestBufferActions(CdcrRequestHandlerTest.java:138)
at 
org.apache.solr.cloud.CdcrRequestHandlerTest.doTest(CdcrRequestHandlerTest.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:873)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:775)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:809)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:820)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14283 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14283/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 61390 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:775: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:655: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:642: Source checkout 
is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 62 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

checkJavadocLinks.py fails with Python 3.5.0

2015-09-23 Thread Ahmet Arslan
Hi,

In an effort to run "ant precommit" I have installed Python 3.5.0.
However, it fails with the following:

[exec]   File 
"/Volumes/data/workspace/solr-trunk/dev-tools/scripts/checkJavadocLinks.py", 
line 20, in <module>
[exec] from html.parser import HTMLParser, HTMLParseError
[exec] ImportError: cannot import name 'HTMLParseError'


Python 3.5.0 (v3.5.0:374f501f4567, Sep 12 2015, 11:00:19) 
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin

I tried to solve this by myself and found something like:
"HTMLParseError has been removed from Python 3.5"

Any suggestions, given that I am Python-ignorant?

Thanks,
Ahmet

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14904740#comment-14904740
 ] 

Ishan Chattopadhyaya commented on SOLR-8068:


[~anshumg] Can you please review my last patch here? I think it is doing the 
right thing, unless I've missed something.

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when i 
> send a delete all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14904771#comment-14904771
 ] 

Anshum Gupta commented on SOLR-8068:


Ah sorry, I actually thought you'd uploaded the exact same patch as mine (they 
were posted just too close together). I don't think we need the check in 
HttpSolrCall; we should just fail sooner.

I'll commit the check in SDF.
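
For illustration, the shape of the guard being discussed in SolrDispatchFilter 
(a hedged sketch only -- the real change is in the attached SOLR-8068 patches):

{code}
// Sketch, not the committed patch: authenticateRequest() NPEs when no
// authentication plugin is configured, so check for that case up front.
AuthenticationPlugin authenticationPlugin = cores.getAuthenticationPlugin();
if (authenticationPlugin == null) {
  return true; // no authentication configured; let the request proceed
}
// otherwise delegate to authenticationPlugin.doAuthenticate(...) as before
{code}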

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when i 
> send a delete all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8075) Leader Initiated Recovery should not stop a leader that participated in an election with all of its replicas from becoming a valid leader.

2015-09-23 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8075:
--
Attachment: SOLR-8075.patch

Patch getting ready for commit - adds a comment and only clears LIR if the 
leader is in LIR.
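
In sketch form (getLeaderInitiatedRecoveryState exists on ZkController; 
clearLIRState below is a hypothetical stand-in for whatever clears the LIR 
node, not the real API):

{code}
// Only clear LIR if this leader is itself marked in LIR -- it just won an
// election in which all of its replicas participated.
String lirState = zkController.getLeaderInitiatedRecoveryState(collection, shardId, coreNodeName);
if (lirState != null) {
  clearLIRState(collection, shardId, coreNodeName); // hypothetical helper
}
{code}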

> Leader Initiated Recovery should not stop a leader that participated in an 
> election with all of its replicas from becoming a valid leader.
> ---
>
> Key: SOLR-8075
> URL: https://issues.apache.org/jira/browse/SOLR-8075
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8075.patch, SOLR-8075.patch, SOLR-8075.patch, 
> SOLR-8075.patch
>
>
> Currently, because of SOLR-8069, all the replicas in a shard can be put into 
> LIR.
> If you restart such a shard, the valid leader will win the election and 
> sync with the shard and then be blocked from registering as ACTIVE because it 
> is in LIR.
> I think that is a little wonky because I don't think it even tries another 
> candidate, since the leader that cannot publish ACTIVE does not have its 
> election canceled.
> While SOLR-8069 should prevent this situation, we should add logic to allow a 
> leader that can sync with its full shard to become leader and publish ACTIVE 
> regardless of LIR.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-09-23 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14904784#comment-14904784
 ] 

Karl Wright commented on LUCENE-6699:
-

no time, I'm afraid...

> Integrate lat/lon BKD and spatial3d
> ---
>
> Key: LUCENE-6699
> URL: https://issues.apache.org/jira/browse/LUCENE-6699
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch
>
>
> I'm opening this for discussion, because I'm not yet sure how to do
> this integration, because of my ignorance about spatial in general and
> spatial3d in particular :)
> Our BKD tree impl is very fast at doing lat/lon shape intersection
> (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
> points.
> I think to integrate with spatial3d, we would first need to record
> lat/lon/z into doc values.  Somewhere I saw discussion about how we
> could stuff all 3 into a single long value with acceptable precision
> loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
> to do the fast per-hit query time filtering.
> But, second: what do we index into the BKD tree?  Can we "just" index
> earth surface lat/lon, and then at query time is spatial3d able to
> give me an enclosing "surface lat/lon" bbox for a 3d shape?  Or
> ... must we index all 3 dimensions into the BKD tree (seems like this
> could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 75 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/75/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 62145 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/build.xml:785: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/build.xml:665: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-5.x-Solaris/build.xml:652: Source 
checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 89 minutes 40 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: checkJavadocLinks.py fails with Python 3.5.0

2015-09-23 Thread Michael McCandless
Looks like you can't be strict when parsing HTML anymore in Python 3.5:
http://bugs.python.org/issue15114

I'll fix checkJavadocLinks...
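
A minimal compatibility shim along these lines (a sketch, assuming the goal is
just to keep the import working on 3.5 -- not necessarily the fix that gets
committed):

try:
    # Python <= 3.4: strict parsing and its exception type still exist
    from html.parser import HTMLParser, HTMLParseError
except ImportError:
    # Python 3.5 removed HTMLParseError (see the issue above); define a
    # stand-in so existing except clauses keep working
    from html.parser import HTMLParser

    class HTMLParseError(Exception):
        pass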

Mike McCandless

http://blog.mikemccandless.com


On Wed, Sep 23, 2015 at 2:58 PM, Alan Woodward  wrote:
> I hit this a couple of weeks back, when homebrew automatically upgraded me
> to python 3.5.  I have a separate python 3.2 installation, and added this
> line to ~/build.properties:
>
> python32.exe=/path/to/python3.2
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 23 Sep 2015, at 18:06, Ahmet Arslan wrote:
>
> Hi,
>
> In an effort to run "ant precommit" I have installed Python 3.5.0.
> However, it fails with the following:
>
> [exec]   File
> "/Volumes/data/workspace/solr-trunk/dev-tools/scripts/checkJavadocLinks.py",
> line 20, in <module>
> [exec] from html.parser import HTMLParser, HTMLParseError
> [exec] ImportError: cannot import name 'HTMLParseError'
>
>
> Python 3.5.0 (v3.5.0:374f501f4567, Sep 12 2015, 11:00:19)
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
>
> I tried to solve this by myself and found something like:
> "HTMLParseError has been removed from Python 3.5"
>
> Any suggestions, given that I am Python-ignorant?
>
> Thanks,
> Ahmet
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6480) Extend Simple GeoPointField Type to 3d

2015-09-23 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6480:
---
Attachment: MortonEncoding3D.java

This issue will be revisited once all in-flight GeoPointField and BKD issues 
are resolved. In the meantime, I am attaching my bit-twiddling code, which 
encodes a 3D GeoPoint into a 96-bit encoding scheme, for anyone who wants to 
tinker w/ 3D BKD or GPF.

A snapshot of the encoding/decoding performance is provided below:
{noformat}
Avg computation: 95.05450009122806 ns  Trials: 28500  Total time: 
27090.532526 ms
Avg computation: 95.02972751724138 ns  Trials: 29000  Total time: 
27558.62098 ms
Avg computation: 95.12489473898304 ns  Trials: 29500  Total time: 
28061.843948 ms
Avg computation: 95.15410407 ns  Trials: 30000  Total time: 28546.231221 ms
Avg computation: 95.3290865737705 ns  Trials: 30500  Total time: 
29075.371405 ms
{noformat}
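
For anyone following along, the interleaving idea in miniature (a toy sketch, 
not the attached MortonEncoding3D.java, which packs wider values into 96 bits):

{code}
// Toy 3D Morton interleave: bit i of x, y and z lands at positions 3i, 3i+1
// and 3i+2; 21 bits per dimension fit in a single 63-bit long.
static long interleave3(int x, int y, int z) {
  long morton = 0L;
  for (int i = 0; i < 21; i++) {
    morton |= ((long) (x >>> i) & 1L) << (3 * i)
            | ((long) (y >>> i) & 1L) << (3 * i + 1)
            | ((long) (z >>> i) & 1L) << (3 * i + 2);
  }
  return morton;
}
{code}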

> Extend Simple GeoPointField Type to 3d 
> ---
>
> Key: LUCENE-6480
> URL: https://issues.apache.org/jira/browse/LUCENE-6480
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Nicholas Knize
> Attachments: MortonEncoding3D.java
>
>
> [LUCENE-6450 | https://issues.apache.org/jira/browse/LUCENE-6450] proposes a 
> simple GeoPointField type to lucene core. This field uses 64bit encoding of 2 
> dimensional points to construct sorted term representations of GeoPoints 
> (aka: GeoHashing).
> This feature investigates adding support for encoding 3 dimensional 
> GeoPoints, either by extending GeoPointField to a Geo3DPointField or adding 
> an additional 3d constructor.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6813) OfflineSorter.sort isn't thread-safe

2015-09-23 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14905093#comment-14905093
 ] 

Dawid Weiss commented on LUCENE-6813:
-

bq. I think the problem is (maybe) that OfflineSorter.sort currently removes 
its output path well before writing to it, and so if the caller is relying on 
Files.createTempFile to "pick" a unique filename across threads, which BKD is 
doing, then this can illegally re-use the same output Path across threads.

Ok, I think I understand you now. In that case, indeed, OfflineSorter.sort 
shouldn't be removing the output path; it should call Files.move with 
REPLACE_EXISTING instead. I don't think an atomic move is required (since we 
don't care about other processes observing a partially moved/copied file).
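
In code, roughly (an illustrative sketch, not the committed fix; tempDir and 
output are stand-in names):

{code}
// Sort into a private temp file, then move it over the caller-chosen output
// path at the end, instead of deleting the output path up front.
Path tmp = Files.createTempFile(tempDir, "sort", ".tmp");
// ... run the merge sort into tmp ...
Files.move(tmp, output, StandardCopyOption.REPLACE_EXISTING);
{code}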


> OfflineSorter.sort isn't thread-safe
> 
>
> Key: LUCENE-6813
> URL: https://issues.apache.org/jira/browse/LUCENE-6813
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: LUCENE-6813.patch
>
>
> The new BKD tree classes, and NumericRangeTree (just a 1D BKD tree),
> make heavy use of OfflineSorter to build their data structures at
> indexing time when the number of indexed documents is biggish.
> But when I was first building them (LUCENE-6477), I hit a thread
> safety issue in OfflineSorter, and at that time I just worked around
> it by creating my own private temp directory each time I need to write
> a BKD tree.
> This workaround is sort of messy, and it causes problems with "pending
> delete" files on Windows when we try to remove that temp directory,
> causing test failures like 
> http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5149/
> I think instead we should fix the root cause ... i.e. make
> OfflineSorter thread safe.  It looks like it's simple...
> Separately I'd like to somehow fix these BKD tests to catch any leaked
> file handles ... I'm not sure they are today.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8085) ChaosMonkey Safe Leader Test fail with shard inconsistency.

2015-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14905123#comment-14905123
 ] 

Yonik Seeley commented on SOLR-8085:


Yeah, it can't be static because each core needs its own state.
We could also maintain it as a normal variable in RecoveryStrategy and either 
reuse RecoveryStrategy objects, or initialize future objects from past objects. 
Thoughts on the best approach?
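
For example, the carry-over option could look like this (field and constructor 
names here are hypothetical, just to make the alternatives concrete):

{code}
// Per-core flag instead of a static: seed a new RecoveryStrategy for the
// same core from the previous one.
private boolean lastRecoveryFailed; // hypothetical name

RecoveryStrategy(RecoveryStrategy previous) {
  this.lastRecoveryFailed = previous != null && previous.lastRecoveryFailed;
}
{code}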

> ChaosMonkey Safe Leader Test fail with shard inconsistency.
> ---
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8085.patch, fail.150922_125320, fail.150922_130608
>
>
> I've been discussing this fail I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has gotten enough docs to pass 
> peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 802 - Still Failing

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/802/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
157 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=1523, 
name=qtp1251467147-1523, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=326, 
name=searcherExecutor-159-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=2721, 
name=RecoveryThread-awholynewstresscollection_collection4_2_shard8_replica2, 
state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:511)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)   
 4) Thread[id=2963, name=RecoveryThread-awholynewcollection_0_shard3_replica1, 
state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:511)
 at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:227)   
 5) Thread[id=512, name=searcherExecutor-284-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)6) Thread[id=550, 
name=searcherExecutor-325-thread-1, state=WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)7) Thread[id=1612, 
name=qtp1170530931-1612, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:389) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:531)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$700(QueuedThreadPool.java:47)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:590) 
at 

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-09-23 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14905040#comment-14905040
 ] 

Michael McCandless commented on LUCENE-6699:


[~shebiki] maybe it's this comment?  
https://issues.apache.org/jira/browse/LUCENE-6480?focusedCommentId=14543396&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14543396

> Integrate lat/lon BKD and spatial3d
> ---
>
> Key: LUCENE-6699
> URL: https://issues.apache.org/jira/browse/LUCENE-6699
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch
>
>
> I'm opening this for discussion, because I'm not yet sure how to do
> this integration, because of my ignorance about spatial in general and
> spatial3d in particular :)
> Our BKD tree impl is very fast at doing lat/lon shape intersection
> (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
> points.
> I think to integrate with spatial3d, we would first need to record
> lat/lon/z into doc values.  Somewhere I saw discussion about how we
> could stuff all 3 into a single long value with acceptable precision
> loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
> to do the fast per-hit query time filtering.
> But, second: what do we index into the BKD tree?  Can we "just" index
> earth surface lat/lon, and then at query time is spatial3d able to
> give me an enclosing "surface lat/lon" bbox for a 3d shape?  Or
> ... must we index all 3 dimensions into the BKD tree (seems like this
> could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-09-23 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14905063#comment-14905063
 ] 

Nicholas Knize commented on LUCENE-6699:


[~shebiki] I updated the Geo3DPacking code some time ago to avoid the overhead 
of BitSet and use raw morton bit twiddling. The intent is to use it in 
LUCENE-6480. Since that issue has stalled a bit I went ahead and attached the 
standalone class (with benchmarks) to the LUCENE-6480 issue if you're 
interested in tinkering.

> Integrate lat/lon BKD and spatial3d
> ---
>
> Key: LUCENE-6699
> URL: https://issues.apache.org/jira/browse/LUCENE-6699
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch
>
>
> I'm opening this for discussion, because I'm not yet sure how to do
> this integration, because of my ignorance about spatial in general and
> spatial3d in particular :)
> Our BKD tree impl is very fast at doing lat/lon shape intersection
> (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
> points.
> I think to integrate with spatial3d, we would first need to record
> lat/lon/z into doc values.  Somewhere I saw discussion about how we
> could stuff all 3 into a single long value with acceptable precision
> loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
> to do the fast per-hit query time filtering.
> But, second: what do we index into the BKD tree?  Can we "just" index
> earth surface lat/lon, and then at query time is spatial3d able to
> give me an enclosing "surface lat/lon" bbox for a 3d shape?  Or
> ... must we index all 3 dimensions into the BKD tree (seems like this
> could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-09-23 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize updated LUCENE-6699:
---
Comment: was deleted

(was: [~shebiki] I updated the Geo3DPacking code some time ago to avoid the 
overhead of BitSet and use raw morton bit twiddling. The intent is to use it in 
LUCENE-6480. Since that issue has stalled a bit I went ahead and attached the 
standalone class (with benchmarks) to the LUCENE-6480 issue if you're 
interested in tinkering.)

> Integrate lat/lon BKD and spatial3d
> ---
>
> Key: LUCENE-6699
> URL: https://issues.apache.org/jira/browse/LUCENE-6699
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch
>
>
> I'm opening this for discussion, because I'm not yet sure how to do
> this integration, because of my ignorance about spatial in general and
> spatial3d in particular :)
> Our BKD tree impl is very fast at doing lat/lon shape intersection
> (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
> points.
> I think to integrate with spatial3d, we would first need to record
> lat/lon/z into doc values.  Somewhere I saw discussion about how we
> could stuff all 3 into a single long value with acceptable precision
> loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
> to do the fast per-hit query time filtering.
> But, second: what do we index into the BKD tree?  Can we "just" index
> earth surface lat/lon, and then at query time is spatial3d able to
> give me an enclosing "surface lat/lon" bbox for a 3d shape?  Or
> ... must we index all 3 dimensions into the BKD tree (seems like this
> could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-09-23 Thread Nicholas Knize (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14905064#comment-14905064
 ] 

Nicholas Knize commented on LUCENE-6699:


[~shebiki] I updated the Geo3DPacking code some time ago to avoid the overhead 
of BitSet and use raw morton bit twiddling. The intent is to use it in 
LUCENE-6480. Since that issue has stalled a bit I went ahead and attached the 
standalone class (with benchmarks) to the LUCENE-6480 issue if you're 
interested in tinkering.

> Integrate lat/lon BKD and spatial3d
> ---
>
> Key: LUCENE-6699
> URL: https://issues.apache.org/jira/browse/LUCENE-6699
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: Trunk, 5.4
>
> Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
> LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch
>
>
> I'm opening this for discussion, because I'm not yet sure how to do
> this integration, because of my ignorance about spatial in general and
> spatial3d in particular :)
> Our BKD tree impl is very fast at doing lat/lon shape intersection
> (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
> points.
> I think to integrate with spatial3d, we would first need to record
> lat/lon/z into doc values.  Somewhere I saw discussion about how we
> could stuff all 3 into a single long value with acceptable precision
> loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
> to do the fast per-hit query time filtering.
> But, second: what do we index into the BKD tree?  Can we "just" index
> earth surface lat/lon, and then at query time is spatial3d able to
> give me an enclosing "surface lat/lon" bbox for a 3d shape?  Or
> ... must we index all 3 dimensions into the BKD tree (seems like this
> could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 407 - Still Failing

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/407/

All tests passed

Build Log:
[...truncated 61560 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/build.xml:775:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/build.xml:655:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-trunk-Java8/build.xml:642:
 Source checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 76 minutes 31 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Sending artifact delta relative to Lucene-Solr-Tests-trunk-Java8 #405
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 464 bytes
Compression is 0.0%
Took 0.1 sec
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60) - Build # 13993 - Failure!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13993/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 62377 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:665: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:652: Source checkout is 
dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 60 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 2698 - Failure!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2698/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 62277 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:785: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:665: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:652: Source checkout 
is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 94 minutes 32 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 406 - Failure

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/406/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:44042/_xi/mo","node_name":"127.0.0.1:44042__xi%2Fmo","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"80000000-7fffffff",   "state":"active", 
  "replicas":{ "core_node1":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:50789/_xi/mo",   
"node_name":"127.0.0.1:50789__xi%2Fmo",   "state":"down"}, 
"core_node2":{   "state":"down",   
"base_url":"http://127.0.0.1:56491/_xi/mo",   
"core":"c8n_1x3_lf_shard1_replica2",   
"node_name":"127.0.0.1:56491__xi%2Fmo"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:44042/_xi/mo",   
"node_name":"127.0.0.1:44042__xi%2Fmo",   "state":"active",   
"leader":"true"}}},   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica3","base_url":"http://127.0.0.1:44042/_xi/mo","node_name":"127.0.0.1:44042__xi%2Fmo","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"c8n_1x3_lf_shard1_replica1",
          "base_url":"http://127.0.0.1:50789/_xi/mo",
          "node_name":"127.0.0.1:50789__xi%2Fmo",
          "state":"down"},
        "core_node2":{
          "state":"down",
          "base_url":"http://127.0.0.1:56491/_xi/mo",
          "core":"c8n_1x3_lf_shard1_replica2",
          "node_name":"127.0.0.1:56491__xi%2Fmo"},
        "core_node3":{
          "core":"c8n_1x3_lf_shard1_replica3",
          "base_url":"http://127.0.0.1:44042/_xi/mo",
          "node_name":"127.0.0.1:44042__xi%2Fmo",
          "state":"active",
          "leader":"true"}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([A6E29BB5B66CCD0C:2EB6A46F1890A0F4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:166)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:864)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:900)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:914)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[jira] [Commented] (SOLR-8081) When creating a collection, we need a way to utilize multiple disks available on a node.

2015-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14904107#comment-14904107
 ] 

Noble Paul commented on SOLR-8081:
--

bq. such as if we need to allocate 3 replicas on 2 disks, pick the disk to put 
the 3rd replica on based on disk capacity.

Usually, when the replicas are created, the disks would be empty. If you are 
creating a replica after the disks have (partially) filled up, it may make sense.

bq. I suppose this is not something that needs to have an API since you'll need 
to make sure the mount points for the disks have the correct perms.

I'm not sure we can do this automatically. How does Solr know where under the 
mount point the dataDir should be?

What if we don't want Solr to use certain disks?
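
For context, the manual workaround mentioned in the description looks like this 
today (hypothetical paths; core.properties is edited by hand and the node 
restarted):

{noformat}
name=mycollection_shard1_replica2
dataDir=/mnt/disk2/solr/mycollection_shard1_replica2/data
{noformat}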

> When creating a collection, we need a way to utilize multiple disks available 
> on a node.
> 
>
> Key: SOLR-8081
> URL: https://issues.apache.org/jira/browse/SOLR-8081
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Currently, if I want to change the dataDir for a core (such as to utilize 
> multiple disks on a Solr node), I need to either setup a symlink or change 
> the dataDir property in core.properties and restart the Solr node. For 
> instance, if I have a 4-node SolrCloud cluster and want to create a 
> collection with 4 shards with rf=2, then 8 Solr cores will be created across 
> the cluster, 2 per node. If I want to have each core use a separate disk, 
> then I have to do that after the fact. I'm aware that I could create the 
> collection with rf=1 and then use AddReplica to add additional replicas with 
> a different dataDir set, but that feels cumbersome as well.
> What would be nice is to have a way for me to specify available disks and 
> have Solr use that information when provisioning cores on the node. 
> [~anshumg] mentioned this might be best accomplished with a replica placement 
> strategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 964 - Still Failing

2015-09-23 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/964/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=110251, name=collection1, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=110251, name=collection1, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:37705: Could not find collection : 
awholynewstresscollection_collection1_0
at __randomizedtesting.SeedInfo.seed([225CBFC8AC184BB8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1099)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:895)




Build Log:
[...truncated 10971 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/temp/junit4-J2-20150923_002555_558.sysout
   [junit4] >>> JVM J2: stdout (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/heapdumps/java_pid26872.hprof
 ...
   [junit4] Heap dump file created [682232877 bytes in 8.791 secs]
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/temp/junit4-J2-20150923_002555_558.syserr
   [junit4] >>> JVM J2: stderr (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] at java.nio.CharBuffer.wrap(CharBuffer.java:369)
   [junit4] at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:265)
   [junit4] at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
   [junit4] at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:135)
   [junit4] at java.io.OutputStreamWriter.write(OutputStreamWriter.java:220)
   [junit4] at java.io.Writer.write(Writer.java:157)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.string(JsonWriter.java:561)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.dependencies.com.google.gson.stream.JsonWriter.value(JsonWriter.java:419)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.flushQueue(Serializer.java:101)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.events.Serializer.serialize(Serializer.java:83)
   [junit4] at 
com.carrotsearch.ant.tasks.junit4.slave.SlaveMain$3$2.write(SlaveMain.java:457)
   [junit4] at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
   [junit4] at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
   [junit4] at java.io.PrintStream.flush(PrintStream.java:338)
   [junit4] at java.io.FilterOutputStream.flush(FilterOutputStream.java:140)
   [junit4] at java.io.PrintStream.write(PrintStream.java:482)
   [junit4] at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
   [junit4] at 
sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
   [junit4] at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
   [junit4] at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
   [junit4] at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
   [junit4] at 
org.apache.log4j.helpers.QuietWriter.flush(QuietWriter.java:59)
   [junit4] at 
org.apache.log4j.WriterAppender.subAppend(WriterAppender.java:324)
   [junit4] at 
org.apache.log4j.WriterAppender.append(WriterAppender.java:162)
   [junit4] at 
org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
   [junit4] at 
org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
   [junit4]

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60) - Build # 14279 - Failure!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14279/
Java: 64bit/jdk1.8.0_60 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 61683 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:775: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:655: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:642: Source checkout 
is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 59 minutes 54 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Reopened] (SOLR-7730) speed-up faceting on doc values fields

2015-09-23 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reopened SOLR-7730:

  Assignee: Mikhail Khludnev

{{SlowCompositeReaderWrapper}} in {{lucene-core-5.3.0.jar}} still has the slow 
implementation:
{code}
public SortedDocValues getSortedDocValues(java.lang.String)
...
 105: aload_0
 106: invokevirtual #39 // Method 
getFieldInfos:()Lorg/apache/lucene/index/FieldInfos;
 109: aload_1
 110: invokevirtual #40 // Method 
org/apache/lucene/index/FieldInfos.fieldInfo:(Ljava/lang/String;)Lorg/apache/lucene/index/FieldInfo;
 113: invokevirtual #41 // Method 
org/apache/lucene/index/FieldInfo.getDocValuesType:()Lorg/apache/lucene/index/DocValuesType;
 116: getstatic #42 // Field 
org/apache/lucene/index/DocValuesType.SORTED:Lorg/apache/lucene/index/DocValuesType;
 119: if_acmpeq 124
 122: aconst_null
 123: areturn
...
{code}

It seems I missed something. 
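
The per-segment check the patch aims for, in sketch form (illustrative only, 
not the committed code):

{code}
// Read the field's DV type from each leaf directly instead of merging
// FieldInfos across the whole composite reader first.
for (LeafReaderContext ctx : in.leaves()) {
  FieldInfo fi = ctx.reader().getFieldInfos().fieldInfo(field);
  if (fi != null && fi.getDocValuesType() != DocValuesType.SORTED) {
    return null; // this segment indexed the field with a different DV type
  }
}
{code}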

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.3
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment index, 
> we see this unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174:
> before returning composite doc values, SCWR merges segment field infos, which 
> is expensive, but after the field infos are merged, it checks *only* the 
> docvalue type in them. This dv type check can be done much more easily on a 
> per-segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7730) speed-up faceting on doc values fields

2015-09-23 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-7730:
---
Fix Version/s: (was: 5.3)
   5.3.1

> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.3.1
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment index 
> we see an unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174: before returning the 
> composite doc values, SCRW merges the segment field infos, which is expensive, 
> but after the fieldinfo is merged it checks *only* the docvalue type in it. 
> This dv type check can be done much more cheaply on a per-segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 72 - Failure!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/72/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 61643 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-trunk-Solaris/build.xml:775: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-trunk-Solaris/build.xml:655: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-trunk-Solaris/build.xml:642: Source 
checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 94 minutes 34 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-7730) speed-up faceting on doc values fields

2015-09-23 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-7730.

   Resolution: Fixed
Fix Version/s: (was: 5.3.1)
   5.4

Got it. The 5.3 branch was cut before the commit; see
https://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_5_3/lucene/core/src/java/org/apache/lucene/index/SlowCompositeReaderWrapper.java?view=log
Thus, this optimization will only be available in 5.4. I'm sorry for the confusion. 


> speed-up faceting on doc values fields
> --
>
> Key: SOLR-7730
> URL: https://issues.apache.org/jira/browse/SOLR-7730
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 5.2.1
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: patch
> Fix For: 5.4
>
> Attachments: LUCENE-7730.patch, LUCENE-7730.patch, SOLR-7730.patch
>
>
> every time we count facets on DocValues fields in Solr on a many-segment index 
> we see an unnecessary hotspot:
> {code}
> 
> at 
> org.apache.lucene.index.MultiFields.getMergedFieldInfos(MultiFields.java:248)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getFieldInfos(SlowCompositeReaderWrapper.java:239)
> at 
> org.apache.lucene.index.SlowCompositeReaderWrapper.getSortedSetDocValues(SlowCompositeReaderWrapper.java:176)
> at 
> org.apache.solr.request.DocValuesFacets.getCounts(DocValuesFacets.java:72)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:460) 
> {code}
> the reason is SlowCompositeReaderWrapper.getSortedSetDocValues() Line 136 and 
> SlowCompositeReaderWrapper.getSortedDocValues() Line 174: before returning the 
> composite doc values, SCRW merges the segment field infos, which is expensive, 
> but after the fieldinfo is merged it checks *only* the docvalue type in it. 
> This dv type check can be done much more cheaply on a per-segment basis. 
> This patch gets some performance gain for those who count DV facets in Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b78) - Build # 13996 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13996/
Java: 64bit/jdk1.9.0-ea-b78 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 53149 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:665: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:652: Source checkout is 
dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 54 minutes 28 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: checkJavadocLinks.py fails with Python 3.5.0

2015-09-23 Thread Alan Woodward
I hit this a couple of weeks back, when homebrew automatically upgraded me to 
python 3.5.  I have a separate python 3.2 installation, and added this line to 
~/build.properties:

python32.exe=/path/to/python3.2

Alan Woodward
www.flax.co.uk


On 23 Sep 2015, at 18:06, Ahmet Arslan wrote:

> Hi,
> 
> In an effort to run "ant precommit" I have installed Python 3.5.0.
> However, it fails with the following:
> 
> [exec]   File 
> "/Volumes/data/workspace/solr-trunk/dev-tools/scripts/checkJavadocLinks.py", 
> line 20, in <module>
> [exec] from html.parser import HTMLParser, HTMLParseError
> [exec] ImportError: cannot import name 'HTMLParseError'
> 
> 
> Python 3.5.0 (v3.5.0:374f501f4567, Sep 12 2015, 11:00:19) 
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
> 
> I tried to solve this by myself, and found something like:
> "HTMLParseError has been removed from Python 3.5"
> 
> Any suggestions, given that I am Python-ignorant?
> 
> Thanks,
> Ahmet
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Reopened] (LUCENE-6810) Upgrade to Spatial4j 0.5

2015-09-23 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reopened LUCENE-6810:


Builds seem to be broken from this commit ... looks like Solr's .sha1 files 
need upgrading too?

{noformat}
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:785: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:665: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/build.xml:652: Source checkout 
is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1
{noformat}

> Upgrade to Spatial4j 0.5
> 
>
> Key: LUCENE-6810
> URL: https://issues.apache.org/jira/browse/LUCENE-6810
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.4
>
> Attachments: LUCENE-6810_Spatial4j_0_5.patch
>
>
> Spatial4j 0.5 was released a few days ago.  There are some bug fixes, most of 
> which were surfaced via the tests here.  It also publishes the test jar 
> (thanks [~nknize] for that one) and with that there are a couple test 
> utilities here I can remove.
> https://github.com/locationtech/spatial4j/blob/master/CHANGES.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905269#comment-14905269
 ] 

ASF subversion and git services commented on SOLR-8068:
---

Commit 1704935 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1704935 ]

SOLR-8068: Check and throw exception in the SDF early if the core container 
wasn't initialized properly or is shutting down.
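
For context, a minimal sketch of the kind of early guard described here 
(hypothetical names and shape, not the actual SolrDispatchFilter patch):

{code}
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Hypothetical guard sketch: fail fast before any authentication logic can
// dereference a half-initialized container. All names are illustrative.
abstract class GuardedDispatchFilterSketch implements Filter {
  private volatile Object cores; // stands in for the core container

  @Override
  public void doFilter(ServletRequest req, ServletResponse rsp, FilterChain chain)
      throws IOException, ServletException {
    if (cores == null) { // container failed to initialize, or is shutting down
      throw new ServletException("Core container not initialized or shutting down");
    }
    chain.doFilter(req, rsp); // safe to proceed with authentication further down
  }
}
{code}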

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when I 
> send a delete-all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_60) - Build # 5280 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5280/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 54746 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:775: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:655: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:642: Source 
checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 94 minutes 9 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.3-Linux (64bit/jdk1.8.0_60) - Build # 235 - Failure!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.3-Linux/235/
Java: 64bit/jdk1.8.0_60 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
ERROR: SolrIndexSearcher opens=28 closes=27

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=28 closes=27
at __randomizedtesting.SeedInfo.seed([711B47E05BEBEC3E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233)
at sun.reflect.GeneratedMethodAccessor40.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.SolrCloudExampleTest: 
1) Thread[id=9490, name=searcherExecutor-4439-thread-1, state=WAITING, 
group=TGRP-SolrCloudExampleTest] at sun.misc.Unsafe.park(Native Method) 
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.SolrCloudExampleTest: 
   1) Thread[id=9490, name=searcherExecutor-4439-thread-1, state=WAITING, 
group=TGRP-SolrCloudExampleTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([711B47E05BEBEC3E]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
There are still zombie threads 

[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905375#comment-14905375
 ] 

ASF subversion and git services commented on SOLR-8068:
---

Commit 1704948 from [~anshumg] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1704948 ]

SOLR-8068: Check and throw exception in the SDF early if the core container 
wasn't initialized properly or is shutting down. (merge from trunk)

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when I 
> send a delete-all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b78) - Build # 13997 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13997/
Java: 64bit/jdk1.9.0-ea-b78 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 53133 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:665: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:652: Source checkout is 
dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 56 minutes 0 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: checkJavadocLinks.py fails with Python 3.5.0

2015-09-23 Thread Ahmet Arslan
Thanks Mike, it's working now.

Ahmet



On Wednesday, September 23, 2015 10:10 PM, Michael McCandless 
 wrote:
Looks like you can't be strict when parsing HTML anymore in Python 3.5:
http://bugs.python.org/issue15114

I'll fix checkJavadocLinks...

Mike McCandless

http://blog.mikemccandless.com


On Wed, Sep 23, 2015 at 2:58 PM, Alan Woodward  wrote:
> I hit this a couple of weeks back, when homebrew automatically upgraded me
> to python 3.5.  I have a separate python 3.2 installation, and added this
> line to ~/build.properties:
>
> python32.exe=/path/to/python3.2
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 23 Sep 2015, at 18:06, Ahmet Arslan wrote:
>
> Hi,
>
> In an effort to run "ant precommit" I have installed Python 3.5.0.
> However, it fails with the following:
>
> [exec]   File
> "/Volumes/data/workspace/solr-trunk/dev-tools/scripts/checkJavadocLinks.py",
> line 20, in <module>
> [exec] from html.parser import HTMLParser, HTMLParseError
> [exec] ImportError: cannot import name 'HTMLParseError'
>
>
> Python 3.5.0 (v3.5.0:374f501f4567, Sep 12 2015, 11:00:19)
> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
>
> I tried to solve this by myself, and found something like:
> "HTMLParseError has been removed from Python 3.5"
>
> Any suggestions, given that I am Python-ignorant?
>
> Thanks,
> Ahmet
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org

>
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6810) Upgrade to Spatial4j 0.5

2015-09-23 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905197#comment-14905197
 ] 

David Smiley commented on LUCENE-6810:
--

Ouch; thanks for bringing this to my attention! I’m traveling today, but I will
have time to get to it at some point today if I’m not beaten to it.

-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


> Upgrade to Spatial4j 0.5
> 
>
> Key: LUCENE-6810
> URL: https://issues.apache.org/jira/browse/LUCENE-6810
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.4
>
> Attachments: LUCENE-6810_Spatial4j_0_5.patch
>
>
> Spatial4j 0.5 was released a few days ago.  There are some bug fixes, most of 
> which were surfaced via the tests here.  It also publishes the test jar 
> (thanks [~nknize] for that one) and with that there are a couple test 
> utilities here I can remove.
> https://github.com/locationtech/spatial4j/blob/master/CHANGES.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14284 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14284/
Java: 32bit/jdk1.8.0_60 -server -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 54721 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:775: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:655: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:642: Source checkout 
is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 56 minutes 57 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-8088) Distributed grouping seems to require docValues in 5.x, didn't in 4.x

2015-09-23 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905776#comment-14905776
 ] 

Varun Thacker commented on SOLR-8088:
-

Hi Shawn,

I think this is the same issue as SOLR-7495?

> Distributed grouping seems to require docValues in 5.x, didn't in 4.x
> -
>
> Key: SOLR-8088
> URL: https://issues.apache.org/jira/browse/SOLR-8088
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>
> I have a field in my index that is lowercased after the KeywordTokenizer.  I 
> wish to do grouping on this field.  It is a distributed index.
> This works fine in Solr 4.9.1 running on Java 8.
> When I try the distributed grouping request (with the same schema) on Solr 
> 5.2.1, it fails, with this exception:
> {code}
> java.lang.IllegalStateException: unexpected docvalues type SORTED_SET for 
> field 'ip' (expected=SORTED). Use UninvertingReader or index with docvalues.
> {code}
> If I make the same request directly to one of the shards on 5.2.1, it works.  
> If I create a copyField to a field using StrField with docValues, the 
> distributed request works ... but then I lose the lowercasing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8081) When creating a collection, we need a way to utilize multiple disks available on a node.

2015-09-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904225#comment-14904225
 ] 

Jan Høydahl commented on SOLR-8081:
---

Is this really something for application-level Solr to worry about? Even the 
most novice IT manager knows how to deploy RAID...

> When creating a collection, we need a way to utilize multiple disks available 
> on a node.
> 
>
> Key: SOLR-8081
> URL: https://issues.apache.org/jira/browse/SOLR-8081
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Currently, if I want to change the dataDir for a core (such as to utilize 
> multiple disks on a Solr node), I need to either setup a symlink or change 
> the dataDir property in core.properties and restart the Solr node. For 
> instance, if I have a 4-node SolrCloud cluster and want to create a 
> collection with 4 shards with rf=2, then 8 Solr cores will be created across 
> the cluster, 2 per node. If I want to have each core use a separate disk, 
> then I have to do that after the fact. I'm aware that I could create the 
> collection with rf=1 and then use AddReplica to add additional replicas with 
> a different dataDir set, but that feels cumbersome as well.
> What would be nice is to have a way for me to specify available disks and 
> have Solr use that information when provisioning cores on the node. 
> [~anshumg] mentioned this might be best accomplished with a replica placement 
> strategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8081) When creating a collection, we need a way to utilize multiple disks available on a node.

2015-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904227#comment-14904227
 ] 

Noble Paul commented on SOLR-8081:
--

Deploying RAID is not what we are discussing. If you have a bunch of RAID disks, 
how do you ensure that your Solr utilizes them properly?

> When creating a collection, we need a way to utilize multiple disks available 
> on a node.
> 
>
> Key: SOLR-8081
> URL: https://issues.apache.org/jira/browse/SOLR-8081
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Currently, if I want to change the dataDir for a core (such as to utilize 
> multiple disks on a Solr node), I need to either setup a symlink or change 
> the dataDir property in core.properties and restart the Solr node. For 
> instance, if I have a 4-node SolrCloud cluster and want to create a 
> collection with 4 shards with rf=2, then 8 Solr cores will be created across 
> the cluster, 2 per node. If I want to have each core use a separate disk, 
> then I have to do that after the fact. I'm aware that I could create the 
> collection with rf=1 and then use AddReplica to add additional replicas with 
> a different dataDir set, but that feels cumbersome as well.
> What would be nice is to have a way for me to specify available disks and 
> have Solr use that information when provisioning cores on the node. 
> [~anshumg] mentioned this might be best accomplished with a replica placement 
> strategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8081) When creating a collection, we need a way to utilize multiple disks available on a node.

2015-09-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904250#comment-14904250
 ] 

Jan Høydahl commented on SOLR-8081:
---

Well, that's up to the IT guys that provision the servers, by choosing a 
suitable RAID level or other similar technology to present a single large 
virtual volume to the application. Thus we do not need to worry about how Solr 
utilizes individual disks; that is taken care of at the OS level.

> When creating a collection, we need a way to utilize multiple disks available 
> on a node.
> 
>
> Key: SOLR-8081
> URL: https://issues.apache.org/jira/browse/SOLR-8081
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Currently, if I want to change the dataDir for a core (such as to utilize 
> multiple disks on a Solr node), I need to either setup a symlink or change 
> the dataDir property in core.properties and restart the Solr node. For 
> instance, if I have a 4-node SolrCloud cluster and want to create a 
> collection with 4 shards with rf=2, then 8 Solr cores will be created across 
> the cluster, 2 per node. If I want to have each core use a separate disk, 
> then I have to do that after the fact. I'm aware that I could create the 
> collection with rf=1 and then use AddReplica to add additional replicas with 
> a different dataDir set, but that feels cumbersome as well.
> What would be nice is to have a way for me to specify available disks and 
> have Solr use that information when provisioning cores on the node. 
> [~anshumg] mentioned this might be best accomplished with a replica placement 
> strategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_60) - Build # 13994 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/13994/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 55318 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:785: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:665: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:652: Source checkout is 
dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 60 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-8081) When creating a collection, we need a way to utilize multiple disks available on a node.

2015-09-23 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14904273#comment-14904273
 ] 

Noble Paul commented on SOLR-8081:
--

I don't know if RAID works like that, and even if it does, it is going to be 
inefficient. What if the same index is written to different physical disks? 
Each request may need to hit multiple disks.

I'm not very knowledgeable about this, but at my past job our system admin used 
to point one Solr instance at each disk to ensure that all disks were utilized 
fairly.
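
To make that concrete, a hypothetical round-robin policy of the kind a replica 
placement strategy could apply (illustrative names only, not an actual Solr API):

{code}
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical round-robin dataDir chooser; not an actual Solr API.
final class DiskRoundRobin {
  private final List<String> disks; // e.g. /disk1/solr, /disk2/solr
  private final AtomicInteger next = new AtomicInteger();

  DiskRoundRobin(List<String> disks) {
    this.disks = disks;
  }

  // Pick a dataDir for the next core, spreading cores evenly across disks.
  String nextDataDir(String coreName) {
    String disk = disks.get(Math.floorMod(next.getAndIncrement(), disks.size()));
    return disk + "/" + coreName + "/data";
  }
}
{code}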

> When creating a collection, we need a way to utilize multiple disks available 
> on a node.
> 
>
> Key: SOLR-8081
> URL: https://issues.apache.org/jira/browse/SOLR-8081
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Reporter: Timothy Potter
>
> Currently, if I want to change the dataDir for a core (such as to utilize 
> multiple disks on a Solr node), I need to either setup a symlink or change 
> the dataDir property in core.properties and restart the Solr node. For 
> instance, if I have a 4-node SolrCloud cluster and want to create a 
> collection with 4 shards with rf=2, then 8 Solr cores will be created across 
> the cluster, 2 per node. If I want to have each core use a separate disk, 
> then I have to do that after the fact. I'm aware that I could create the 
> collection with rf=1 and then use AddReplica to add additional replicas with 
> a different dataDir set, but that feels cumbersome as well.
> What would be nice is to have a way for me to specify available disks and 
> have Solr use that information when provisioning cores on the node. 
> [~anshumg] mentioned this might be best accomplished with a replica placement 
> strategy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8068) NPE in SolrDispatchFilter.authenticateRequest

2015-09-23 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905413#comment-14905413
 ] 

Anshum Gupta commented on SOLR-8068:


Thanks everyone. I think this is sorted now.

> NPE in SolrDispatchFilter.authenticateRequest
> -
>
> Key: SOLR-8068
> URL: https://issues.apache.org/jira/browse/SOLR-8068
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Markus Jelsma
> Fix For: 5.4
>
> Attachments: SOLR-8068.patch, SOLR-8068.patch, 
> solr-core-5.3.0-SNAPSHOT.jar
>
>
> Suddenly, one of our Solr 5.3 nodes responds with the following trace when I 
> send a delete-all query via SolrJ.
> {code}
> java.lang.NullPointerException
> at 
> org.apache.solr.servlet.SolrDispatchFilter.authenticateRequest(SolrDispatchFilter.java:237)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:186)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8085) ChaosMonkey Safe Leader Test fail with shard inconsistency.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905435#comment-14905435
 ] 

Mark Miller commented on SOLR-8085:
---

It could be a static map or something in RecoveryStrategy too - but seeing as we 
already store another variable like this in the default state, this made a lot 
of sense to me.

With your patch, running on a patched version of 4.10.3, I was still seeing 
only one other type of fail.

Docs that came in during recovery - after publishing RECOVERING but before 
buffering started - would end up interfering and causing a false peer sync pass 
if enough of them came in.

I seem to have worked around this issue by buffering docs before peer sync and 
before publishing as RECOVERING (the signal for the leader to start sending 
updates).

With my current runs using no deletes, I have not yet found a fail after this 
on this version of the code.
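
In other words, the workaround reorders recovery roughly like this (a 
hypothetical sketch with illustrative names, not the actual RecoveryStrategy 
code):

{code}
// Hypothetical ordering sketch of the workaround described above; every name
// here is an illustrative stand-in, not the actual RecoveryStrategy API.
abstract class RecoveryOrderingSketch {
  abstract void bufferUpdates();     // start capturing incoming updates
  abstract void publishRecovering(); // signal the leader to start forwarding docs
  abstract boolean peerSync();       // try the cheap sync against the leader
  abstract void fullReplication();   // snapshot-pull fallback
  abstract void replayBufferedUpdates();

  final void recover() {
    bufferUpdates();     // buffer BEFORE publishing, so docs that arrive while
    publishRecovering(); // recovering are captured rather than indexed directly
    if (!peerSync()) {   // buffered docs can no longer fake a peer sync pass
      fullReplication();
    }
    replayBufferedUpdates();
  }
}
{code}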

> ChaosMonkey Safe Leader Test fail with shard inconsistency.
> ---
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8085.patch, fail.150922_125320, fail.150922_130608
>
>
> I've been discussing this fail I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has gotten enough docs to pass 
> peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6810) Upgrade to Spatial4j 0.5

2015-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905551#comment-14905551
 ] 

ASF subversion and git services commented on LUCENE-6810:
-

Commit 1704969 from [~anshumg] in branch 'dev/trunk'
[ https://svn.apache.org/r1704969 ]

LUCENE-6810: Removing spatial4j-0.4.1.jar.sha1 from Solr

> Upgrade to Spatial4j 0.5
> 
>
> Key: LUCENE-6810
> URL: https://issues.apache.org/jira/browse/LUCENE-6810
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.4
>
> Attachments: LUCENE-6810_Spatial4j_0_5.patch
>
>
> Spatial4j 0.5 was released a few days ago.  There are some bug fixes, most of 
> which were surfaced via the tests here.  It also publishes the test jar 
> (thanks [~nknize] for that one) and with that there are a couple test 
> utilities here I can remove.
> https://github.com/locationtech/spatial4j/blob/master/CHANGES.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8085) ChaosMonkey Safe Leader Test fail with shard inconsistency.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905439#comment-14905439
 ] 

Mark Miller commented on SOLR-8085:
---

I should mention one other change, as I am mostly testing the HDFS version of 
this ChaosMonkey test: after talking to Yonik, I also fixed an issue where, 
because we don't have truncate support, we were replaying buffered docs on fail 
to get past them. We really should not do that, as it can lead to bad peer sync 
passes; I have a fix for that as well and will file a separate JIRA issue for it.

> ChaosMonkey Safe Leader Test fail with shard inconsistency.
> ---
>
> Key: SOLR-8085
> URL: https://issues.apache.org/jira/browse/SOLR-8085
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
> Attachments: SOLR-8085.patch, fail.150922_125320, fail.150922_130608
>
>
> I've been discussing this fail I found with Yonik.
> The problem seems to be that a replica tries to recover and publishes 
> recovering - the attempt then fails, but docs are now coming in from the 
> leader. The replica tries to recover again and has gotten enough docs to pass 
> peer sync.
> I'm trying a possible solution now where we won't allow peer sync after a 
> recovery that is not successful.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8087) Look into defensive check in publish that will not let a replica in LIR publish ACTIVE.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905467#comment-14905467
 ] 

Mark Miller commented on SOLR-8087:
---

One thing I've seen happen with this: in a case where a non-leader was 
correctly put in LIR, when it tries to publish ACTIVE on recovery it trips this 
check. We don't properly handle that exception in RecoveryStrategy, and so we 
don't auto-retry recovery like we should.
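
A hypothetical shape for handling that path (illustrative names, not the actual 
RecoveryStrategy code):

{code}
// Hypothetical sketch: treat a rejected ACTIVE publish as a retryable failure
// instead of letting the exception escape; all names are illustrative.
abstract class PublishRetrySketch {
  abstract void publishActive() throws Exception; // may trip the LIR defensive check
  abstract void waitBeforeRetry() throws InterruptedException;

  final void publishActiveWithRetry(int maxAttempts) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        publishActive();
        return;
      } catch (Exception e) {  // e.g. "cannot publish ACTIVE while in LIR"
        if (attempt >= maxAttempts) {
          throw e;             // give up only after bounded retries
        }
        waitBeforeRetry();     // back off, then re-enter recovery
      }
    }
  }
}
{code}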

> Look into defensive check in publish that will not let a replica in LIR 
> publish ACTIVE.
> ---
>
> Key: SOLR-8087
> URL: https://issues.apache.org/jira/browse/SOLR-8087
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>
> What I am worried about here is that if you hit this situation, how is the 
> election canceled? It seems like perhaps the leader can't publish ACTIVE and 
> then the shard is locked even if another replica could be leader?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7438) Look into using new HDFS truncate feature in HdfsTransactionLog.

2015-09-23 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905473#comment-14905473
 ] 

Mark Miller commented on SOLR-7438:
---

bq.   // This is somewhat brittle, but current usage allows for it

Actually, I don't think it does. I think this can create the opportunity for a 
false peer sync success. We have to do something else until truncate arrives. 
I've discussed a possible workaround with Yonik and have been testing it.
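
For comparison, a hypothetical truncate-based drop using the HDFS truncate API 
added in Hadoop 2.7 ({{FileSystem.truncate(Path, long)}}); a sketch only, not 
the actual HdfsTransactionLog code:

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical sketch: truncate the tlog back to where buffering began,
// discarding the buffered region instead of slowly applying it.
// Not the actual HdfsTransactionLog code.
final class TruncateDropSketch {
  static boolean dropBufferedUpdates(FileSystem fs, Path tlog, long bufferStart)
      throws IOException {
    // Returns true if the truncate completed immediately; false means the
    // last block is being adjusted asynchronously and callers must wait.
    return fs.truncate(tlog, bufferStart);
  }
}
{code}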

> Look into using new HDFS truncate feature in HdfsTransactionLog.
> 
>
> Key: SOLR-7438
> URL: https://issues.apache.org/jira/browse/SOLR-7438
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
>
> Looks like truncate is added in 2.7.
> See HdfsTransactionLog:
> {code}
>   // HACK
>   // while waiting for HDFS-3107, instead of quickly
>   // dropping, we slowly apply
>   // This is somewhat brittle, but current usage
>   // allows for it
>   @Override
>   public boolean dropBufferedUpdates() {
> Future<RecoveryInfo> future = applyBufferedUpdates();
> if (future != null) {
>   try {
> future.get();
>   } catch (InterruptedException | ExecutionException e) {
> throw new RuntimeException(e);
>   }
> }
> return true;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_60) - Build # 5151 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5151/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 55214 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:785: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:665: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:652: Source 
checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 85 minutes 24 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: checkJavadocLinks.py fails with Python 3.5.0

2015-09-23 Thread Michael McCandless
You're welcome!

Mike McCandless

http://blog.mikemccandless.com


On Wed, Sep 23, 2015 at 4:33 PM, Ahmet Arslan  wrote:
> Thanks Mike, it's working now.
>
> Ahmet
>
>
>
> On Wednesday, September 23, 2015 10:10 PM, Michael McCandless 
>  wrote:
> Looks like you can't be strict when parsing HTML anymore in Python 3.5:
> http://bugs.python.org/issue15114
>
> I'll fix checkJavadocLinks...
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Wed, Sep 23, 2015 at 2:58 PM, Alan Woodward  wrote:
>> I hit this a couple of weeks back, when homebrew automatically upgraded me
>> to python 3.5.  I have a separate python 3.2 installation, and added this
>> line to ~/build.properties:
>>
>> python32.exe=/path/to/python3.2
>>
>> Alan Woodward
>> www.flax.co.uk
>>
>>
>> On 23 Sep 2015, at 18:06, Ahmet Arslan wrote:
>>
>> Hi,
>>
>> In an effort to run "ant precommit" I have installed Python 3.5.0.
>> However, it fails with the following:
>>
>> [exec]   File
>> "/Volumes/data/workspace/solr-trunk/dev-tools/scripts/checkJavadocLinks.py",
>> line 20, in <module>
>> [exec] from html.parser import HTMLParser, HTMLParseError
>> [exec] ImportError: cannot import name 'HTMLParseError'
>>
>>
>> Python 3.5.0 (v3.5.0:374f501f4567, Sep 12 2015, 11:00:19)
>> [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
>>
>> I tried to solve this by myself, and found something like:
>> "HTMLParseError has been removed from Python 3.5"
>>
>> Any suggestions, given that I am Python-ignorant?
>>
>> Thanks,
>> Ahmet
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>>
>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2752 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2752/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([E8D05DA9A5E3B734]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233)
at sun.reflect.GeneratedMethodAccessor87.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1665)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=8666, name=searcherExecutor-3644-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=8666, name=searcherExecutor-3644-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([E8D05DA9A5E3B734]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=8666, 

[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60) - Build # 14285 - Still Failing!

2015-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/14285/
Java: 32bit/jdk1.8.0_60 -client -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 54728 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:775: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:655: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:642: Source checkout 
is dirty after running tests!!! Offending files:
* ./solr/licenses/spatial4j-0.4.1.jar.sha1

Total time: 60 minutes 37 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-6810) Upgrade to Spatial4j 0.5

2015-09-23 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14905549#comment-14905549
 ] 

Anshum Gupta commented on LUCENE-6810:
--

Seems like we just need to remove that file. {{spatial4j-0.5.jar.sha1}} already 
exists in Solr.
I'll delete and commit.

> Upgrade to Spatial4j 0.5
> 
>
> Key: LUCENE-6810
> URL: https://issues.apache.org/jira/browse/LUCENE-6810
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 5.4
>
> Attachments: LUCENE-6810_Spatial4j_0_5.patch
>
>
> Spatial4j 0.5 was released a few days ago.  There are some bug fixes, most of 
> which were surfaced via the tests here.  It also publishes the test jar 
> (thanks [~nknize] for that one) and with that there are a couple test 
> utilities here I can remove.
> https://github.com/locationtech/spatial4j/blob/master/CHANGES.md



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


