[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+158) - Build # 19045 - Still Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19045/
Java: 32bit/jdk-9-ea+158 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:38067/solr/awhollynewcollection_0: 
Expected mime type application/octet-stream but got text/html.   
 
Error 510
HTTP ERROR: 510
Problem accessing /solr/awhollynewcollection_0/select. Reason:
{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:7},code=510}
Powered by Jetty:// 9.3.14.v20161028 (http://eclipse.org/jetty)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38067/solr/awhollynewcollection_0: Expected 
mime type application/octet-stream but got text/html. 


Error 510 


HTTP ERROR: 510
Problem accessing /solr/awhollynewcollection_0/select. Reason:

{metadata={error-class=org.apache.solr.common.SolrException,root-error-class=org.apache.solr.common.SolrException},msg={awhollynewcollection_0:7},code=510}
Powered by Jetty:// 9.3.14.v20161028 (http://eclipse.org/jetty)



at 
__randomizedtesting.SeedInfo.seed([60782D5DED1F0D73:280D59E9EB2C22E6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:595)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:391)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1361)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1112)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1215)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:523)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)

[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+158) - Build # 2936 - Still Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2936/
Java: 64bit/jdk-9-ea+158 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestReadOnlyIndex.testReadOnlyIndex

Error Message:
access denied ("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/core/test/J1/temp/readonlyindex14470515049817716465"
 "read")

Stack Trace:
java.security.AccessControlException: access denied ("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/core/test/J1/temp/readonlyindex14470515049817716465"
 "read")
at 
__randomizedtesting.SeedInfo.seed([A6EDD861C1F0F4CC:1F68033F8933BADE]:0)
at 
java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:471)
at 
java.base/java.security.AccessController.checkPermission(AccessController.java:894)
at 
java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:560)
at 
java.base/java.lang.SecurityManager.checkRead(SecurityManager.java:899)
at java.base/sun.nio.fs.UnixPath.checkRead(UnixPath.java:818)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:395)
at java.base/java.nio.file.Files.newDirectoryStream(Files.java:460)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:215)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:234)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:672)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:77)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at 
org.apache.lucene.index.TestReadOnlyIndex.doTestReadOnlyIndex(TestReadOnlyIndex.java:81)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at 
org.apache.lucene.util.LuceneTestCase.runWithRestrictedPermissions(LuceneTestCase.java:2816)
at 
org.apache.lucene.index.TestReadOnlyIndex.testReadOnlyIndex(TestReadOnlyIndex.java:69)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Commented] (SOLR-6237) An option to have only leaders write and replicas read when using a shared file system with SolrCloud.

2017-02-24 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884047#comment-15884047
 ] 

Mark Miller commented on SOLR-6237:
---

[~tim.potter], where are you at with this? I can try and update my old checkout 
and push a branch if you want to start pushing this forward.

> An option to have only leaders write and replicas read when using a shared 
> file system with SolrCloud.
> --
>
> Key: SOLR-6237
> URL: https://issues.apache.org/jira/browse/SOLR-6237
> Project: Solr
>  Issue Type: New Feature
>  Components: hdfs, SolrCloud
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: 0001-unified.patch, SOLR-6237.patch, Unified Replication 
> Design.pdf
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3854 - Still Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3854/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
Thread pool didn't terminate within 10 secs

Stack Trace:
java.lang.AssertionError: Thread pool didn't terminate within 10 secs
at 
__randomizedtesting.SeedInfo.seed([55FC32207A523345:DDA80DFAD4AE5EBD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.delayedReorderingFetchesMissingUpdateFromLeaderTest(TestInPlaceUpdatesDistrib.java:815)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:142)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+158) - Build # 19044 - Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19044/
Java: 64bit/jdk-9-ea+158 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.lucene.index.TestReadOnlyIndex.testReadOnlyIndex

Error Message:
access denied ("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/readonlyindex10038765096431525403"
 "read")

Stack Trace:
java.security.AccessControlException: access denied ("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-master-Linux/lucene/build/core/test/J2/temp/readonlyindex10038765096431525403"
 "read")
at 
__randomizedtesting.SeedInfo.seed([3A0EF95CD9CCA6A6:838B2202910FE8B4]:0)
at 
java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:471)
at 
java.base/java.security.AccessController.checkPermission(AccessController.java:894)
at 
java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:560)
at 
java.base/java.lang.SecurityManager.checkRead(SecurityManager.java:899)
at java.base/sun.nio.fs.UnixPath.checkRead(UnixPath.java:818)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:395)
at java.base/java.nio.file.Files.newDirectoryStream(Files.java:460)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:215)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:234)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:646)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:77)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at 
org.apache.lucene.index.TestReadOnlyIndex.doTestReadOnlyIndex(TestReadOnlyIndex.java:81)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at 
org.apache.lucene.util.LuceneTestCase.runWithRestrictedPermissions(LuceneTestCase.java:2815)
at 
org.apache.lucene.index.TestReadOnlyIndex.testReadOnlyIndex(TestReadOnlyIndex.java:69)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-24 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15884014#comment-15884014
 ] 

Erick Erickson commented on LUCENE-7705:


Actually, I didn't encounter the error in LowerCaseFilterFactory until I tried 
it out on a fully-compiled Solr instance with maxTokenLen set in the 
managed_schema file. I was thinking it might make sense to add maxTokenLen to a 
couple of the schemas used by the test cases, leaving it at the default value 
of 256 just to get some test coverage. I think this is really the difference 
between a test case at the Lucene level and one driven by the schema at the 
Solr level.
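
For illustration only, here is a minimal sketch of what such a test-schema 
entry would exercise from the Java side. It assumes the maxTokenLen argument 
from the patch is applied; the parameter name comes from this issue, not from a 
released API:

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizerFactory;
import org.apache.lucene.util.AttributeFactory;

public class MaxTokenLenSketch {
  public static void main(String[] args) {
    // Java-side equivalent of a schema entry like
    // <tokenizer class="solr.WhitespaceTokenizerFactory" maxTokenLen="256"/>.
    // 256 matches the current hard-coded default, so tokenization behavior is
    // unchanged and the test would merely exercise the new configuration path.
    Map<String, String> factoryArgs = new HashMap<>();
    factoryArgs.put("maxTokenLen", "256"); // parameter proposed in LUCENE-7705
    WhitespaceTokenizerFactory factory = new WhitespaceTokenizerFactory(factoryArgs);
    Tokenizer tokenizer = factory.create(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY);
  }
}
{code}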

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
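
For clarity, a hypothetical sketch of the kind of base-class constructor being 
discussed (shape and names are my assumption, not the committed API):

{code}
import org.apache.lucene.analysis.Tokenizer;

// Sketch only: a CharTokenizer-like base class that accepts a configurable
// token-length limit instead of the hard-coded 256-character default.
public abstract class CharTokenizerSketch extends Tokenizer {
  private static final int DEFAULT_MAX_TOKEN_LEN = 256; // current hard-coded limit
  protected final int maxTokenLen;

  protected CharTokenizerSketch() {
    this(DEFAULT_MAX_TOKEN_LEN); // existing behavior stays the default
  }

  // Proposed addition: factories pass the schema-configured limit through.
  protected CharTokenizerSketch(int maxTokenLen) {
    this.maxTokenLen = maxTokenLen;
  }
}
{code}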



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10207) Harden CleanupOldIndexTest

2017-02-24 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10207:
--

 Summary: Harden CleanupOldIndexTest
 Key: SOLR-10207
 URL: https://issues.apache.org/jira/browse/SOLR-10207
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10120) A SolrCore reload can remove the index from the previous SolrCore during replication index rollover.

2017-02-24 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10120.

Resolution: Fixed

Nope, it seems the CleanupOldIndexTest failures are not related to this issue.

> A SolrCore reload can remove the index from the previous SolrCore during 
> replication index rollover.
> 
>
> Key: SOLR-10120
> URL: https://issues.apache.org/jira/browse/SOLR-10120
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10120.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 293 - Unstable

2017-02-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/293/

885 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([9838555B0BC9B54B]:0)
at 
org.apache.solr.core.CoreContainer.getNodeNameLocal(CoreContainer.java:605)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:481)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:177)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:140)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:146)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:109)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:741)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:731)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:559)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery: 1) Thread[id=14, 
name=solr-idle-connections-evictor, state=TIMED_WAITING, 
group=TGRP-TestFoldingMultitermExtrasQuery] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.update.UpdateShardHandler$IdleConnectionsEvictor$1.run(UpdateShardHandler.java:258)
 at java.lang.Thread.run(Thread.java:745)2) Thread[id=13, 
name=solr-idle-connections-evictor, state=TIMED_WAITING, 
group=TGRP-TestFoldingMultitermExtrasQuery] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.update.UpdateShardHandler$IdleConnectionsEvictor$1.run(UpdateShardHandler.java:258)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.analysis.TestFoldingMultitermExtrasQuery: 
   1) Thread[id=14, name=solr-idle-connections-evictor, state=TIMED_WAITING, 
group=TGRP-TestFoldingMultitermExtrasQuery]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.update.UpdateShardHandler$IdleConnectionsEvictor$1.run(UpdateShardHandler.java:258)
at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=13, name=solr-idle-connections-evictor, 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_121) - Build # 749 - Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/749/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"http://127.0.0.1:57829/solr",   
"node_name":"127.0.0.1:57829_solr",   "state":"down"}, 
"core_node2":{   "core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"http://127.0.0.1:57826/solr",   
"node_name":"127.0.0.1:57826_solr",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"MissingSegmentRecoveryTest_shard1_replica2",
  "base_url":"http://127.0.0.1:57829/solr",
  "node_name":"127.0.0.1:57829_solr",
  "state":"down"},
"core_node2":{
  "core":"MissingSegmentRecoveryTest_shard1_replica1",
  "base_url":"http://127.0.0.1:57826/solr",
  "node_name":"127.0.0.1:57826_solr",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([B5B702CE1E17DE35:E5E29ACD47366828]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)

[jira] [Created] (SOLR-10206) ReplicationHandler should have improved logging when it writes 0 bytes for a file.

2017-02-24 Thread Mark Miller (JIRA)
Mark Miller created SOLR-10206:
--

 Summary: ReplicationHandler should have improved logging when it 
writes 0 bytes for a file.
 Key: SOLR-10206
 URL: https://issues.apache.org/jira/browse/SOLR-10206
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller


Currently, something like this only shows up on the slave, when it fails after 
seeing 0 bytes downloaded for a file. When that happens, it is hard to know 
exactly what happened on the master, so we should add some logging there to 
make it clear.
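
As a rough illustration only (hypothetical helper and variable names, not the 
actual ReplicationHandler code), the master-side write path could record the 
file name and byte count:

{code}
// Hypothetical sketch: surface zero-byte writes on the master so that
// slave-side "0 bytes downloaded" failures can be traced back to their source.
long bytesWritten = copyFileToResponse(fileName, out); // assumed helper
if (bytesWritten == 0) {
  log.warn("Wrote 0 bytes for replication file {} (generation {})",
      fileName, generation);
}
{code}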



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-10120) A SolrCore reload can remove the index from the previous SolrCore during replication index rollover.

2017-02-24 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reopened SOLR-10120:


I think there is an issue with the index cleanup test after this. I need to 
investigate.

> A SolrCore reload can remove the index from the previous SolrCore during 
> replication index rollover.
> 
>
> Key: SOLR-10120
> URL: https://issues.apache.org/jira/browse/SOLR-10120
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-10120.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-24 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated LUCENE-7705:
-
Attachment: LUCENE-7705.patch

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7705.patch, LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-24 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883979#comment-15883979
 ] 

Amrit Sarkar commented on LUCENE-7705:
--

Erick,

All tests pass with the current patch uploaded, after minor corrections and 
fixes to existing test classes.

{noformat}
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/KeywordTokenizerFactory.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizer.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LetterTokenizerFactory.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LowerCaseTokenizer.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/LowerCaseTokenizerFactory.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/UnicodeWhitespaceTokenizer.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizer.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/core/WhitespaceTokenizerFactory.java
modified:   
lucene/analysis/common/src/java/org/apache/lucene/analysis/util/CharTokenizer.java
new file:   
lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestKeywordTokenizer.java
modified:   
lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestRandomChains.java
modified:   
lucene/analysis/common/src/test/org/apache/lucene/analysis/core/TestUnicodeWhitespaceTokenizer.java
modified:   
lucene/analysis/common/src/test/org/apache/lucene/analysis/util/TestCharTokenizers.java
{noformat}

Test failure fixes:

1. org.apache.lucene.analysis.core.TestRandomChains (suite):

   Added the four failing tokenizer constructors to the brokenConstructors map 
to bypass them for now.
This class checks which arguments are legal for each constructor and builds 
maps of the parameter types ahead of time for later checks. It does not account 
for boxing/unboxing of primitive data types: when a constructor takes a 
parameter as _"java.lang.Integer"_, the map-building step unboxes it to 
_"int"_, and the later check fails because _"int.class"_ and 
_"java.lang.Integer.class"_ do not match, which does not make sense. Either we 
fix how the maps are built, or we skip these constructors for now (see the 
sketch below).
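
A standalone sketch of the mismatch (my own illustration, not TestRandomChains 
code): reflection reports the declared primitive type, while an autoboxed 
argument reports the wrapper class, so a naive equality check fails:

{code}
import java.lang.reflect.Constructor;

public class BoxingMismatchSketch {
  public BoxingMismatchSketch(int maxTokenLen) {}

  public static void main(String[] args) throws Exception {
    Object arg = 255; // autoboxed to java.lang.Integer
    Constructor<?> ctor = BoxingMismatchSketch.class.getConstructor(int.class);
    Class<?> declared = ctor.getParameterTypes()[0]; // int.class
    System.out.println(declared);                    // prints: int
    System.out.println(arg.getClass());              // prints: class java.lang.Integer
    System.out.println(declared == arg.getClass());  // prints: false
  }
}
{code}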

2. The getMultiTermComponent method constructed a LowerCaseFilterFactory with 
the original arguments, including maxTokenLen, which then threw an error:

   I am not sure what corrected that, but I no longer see any suite failing, 
not even TestFactories, which I believe was throwing the error for incompatible 
constructors / no such method, etc. Kindly verify whether we are still facing 
the issue or whether we need to harden the test cases for it.


> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256 character 
> limit for the CharTokenizer? In order to change this limit it requires that 
> people copy/paste the incrementToken into some new class since incrementToken 
> is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> to do so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+158) - Build # 2935 - Still Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2935/
Java: 32bit/jdk-9-ea+158 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.lucene.index.TestReadOnlyIndex.testReadOnlyIndex

Error Message:
access denied ("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/core/test/J0/temp/readonlyindex13466083482201246512"
 "read")

Stack Trace:
java.security.AccessControlException: access denied ("java.io.FilePermission" 
"/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/core/test/J0/temp/readonlyindex13466083482201246512"
 "read")
at 
__randomizedtesting.SeedInfo.seed([BEBF9CB2978804E9:73A47ECDF4B4AFB]:0)
at 
java.base/java.security.AccessControlContext.checkPermission(AccessControlContext.java:471)
at 
java.base/java.security.AccessController.checkPermission(AccessController.java:894)
at 
java.base/java.lang.SecurityManager.checkPermission(SecurityManager.java:560)
at 
java.base/java.lang.SecurityManager.checkRead(SecurityManager.java:899)
at java.base/sun.nio.fs.UnixPath.checkRead(UnixPath.java:818)
at 
java.base/sun.nio.fs.UnixFileSystemProvider.newDirectoryStream(UnixFileSystemProvider.java:395)
at java.base/java.nio.file.Files.newDirectoryStream(Files.java:460)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:215)
at org.apache.lucene.store.FSDirectory.listAll(FSDirectory.java:234)
at 
org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:672)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:77)
at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
at 
org.apache.lucene.index.TestReadOnlyIndex.doTestReadOnlyIndex(TestReadOnlyIndex.java:81)
at java.base/java.security.AccessController.doPrivileged(Native Method)
at 
org.apache.lucene.util.LuceneTestCase.runWithRestrictedPermissions(LuceneTestCase.java:2816)
at 
org.apache.lucene.index.TestReadOnlyIndex.testReadOnlyIndex(TestReadOnlyIndex.java:69)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-10190) Potential NPE in CloudSolrClient when reading stale alias

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883909#comment-15883909
 ] 

ASF subversion and git services commented on SOLR-10190:


Commit 2d63916b70f2853787b545eda6681e64a2c2e352 in lucene-solr's branch 
refs/heads/branch_6_4 from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2d63916 ]

SOLR-10190: Fix NPE in CloudSolrClient when reading stale alias

This closes #160


> Potential NPE in CloudSolrClient when reading stale alias
> -
>
> Key: SOLR-10190
> URL: https://issues.apache.org/jira/browse/SOLR-10190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: clients - java, SolrJ
>Affects Versions: 5.5, 6.x, master (7.0)
>Reporter: Janosch Woschitz
>Assignee: Tomás Fernández Löbbe
>
> The CloudSolrClient raises a NullPointerException when CloudSolrClient::add 
> is invoked and pointed to an alias which references a collection which does 
> not exist anymore.
> {code}
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([1D00539A964E5C5D:D7D145363AD5CCA]:0)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1078)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
> {code}
> This is rather unexpected, since the CloudSolrClient usually raises a 
> SolrException containing a descriptive error message (e.g. "Collection not 
> found: xyz") when a collection cannot be resolved.
> In general this error condition could be triggered also by other edge cases 
> since CloudSolrClient::getDocCollection might return null but the code 
> following that invocation is not guarded against null values.
> {code}
> // track the version of state we're using on the client side using 
> the _stateVer_ param
> DocCollection coll = getDocCollection(requestedCollection, null);
> int collVer = coll.getZNodeVersion();
> {code}
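
A defensive check along these lines would restore the usual descriptive 
SolrException (a sketch only, assuming BAD_REQUEST is the appropriate error 
code; the committed fix may differ):

{code}
DocCollection coll = getDocCollection(requestedCollection, null);
if (coll == null) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
      "Collection not found: " + requestedCollection);
}
int collVer = coll.getZNodeVersion();
{code}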



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10190) Potential NPE in CloudSolrClient when reading stale alias

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883910#comment-15883910
 ] 

ASF subversion and git services commented on SOLR-10190:


Commit 900367912f2e75c3171fcf64a8b73fd5e11f6098 in lucene-solr's branch 
refs/heads/branch_6_4 from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9003679 ]

SOLR-10190: Fixed assert message


> Potential NPE in CloudSolrClient when reading stale alias
> -
>
> Key: SOLR-10190
> URL: https://issues.apache.org/jira/browse/SOLR-10190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: clients - java, SolrJ
>Affects Versions: 5.5, 6.x, master (7.0)
>Reporter: Janosch Woschitz
>Assignee: Tomás Fernández Löbbe
>
> The CloudSolrClient raises a NullPointerException when CloudSolrClient::add 
> is invoked and pointed to an alias which references a collection which does 
> not exist anymore.
> {code}
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([1D00539A964E5C5D:D7D145363AD5CCA]:0)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1078)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
> {code}
> This is rather unexpected, since the CloudSolrClient usually raises a 
> SolrException containing a descriptive error message (e.g. "Collection not 
> found: xyz") when a collection cannot be resolved.
> In general this error condition could be triggered also by other edge cases 
> since CloudSolrClient::getDocCollection might return null but the code 
> following that invocation is not guarded against null values.
> {code}
> // track the version of state we're using on the client side using 
> the _stateVer_ param
> DocCollection coll = getDocCollection(requestedCollection, null);
> int collVer = coll.getZNodeVersion();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10190) Potential NPE in CloudSolrClient when reading stale alias

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883906#comment-15883906
 ] 

ASF subversion and git services commented on SOLR-10190:


Commit f9d9ff94cf3863fdc9189ad3363c71662200ab58 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9d9ff9 ]

SOLR-10190: Fix NPE in CloudSolrClient when reading stale alias

This closes #160


> Potential NPE in CloudSolrClient when reading stale alias
> -
>
> Key: SOLR-10190
> URL: https://issues.apache.org/jira/browse/SOLR-10190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: clients - java, SolrJ
>Affects Versions: 5.5, 6.x, master (7.0)
>Reporter: Janosch Woschitz
>Assignee: Tomás Fernández Löbbe
>
> The CloudSolrClient raises a NullPointerException when CloudSolrClient::add 
> is invoked and pointed to an alias which references a collection which does 
> not exist anymore.
> {code}
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([1D00539A964E5C5D:D7D145363AD5CCA]:0)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1078)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
> {code}
> This is rather unexpected, since the CloudSolrClient usually raises a 
> SolrException containing a descriptive error message (e.g. "Collection not 
> found: xyz") when a collection cannot be resolved.
> In general this error condition could be triggered also by other edge cases 
> since CloudSolrClient::getDocCollection might return null but the code 
> following that invocation is not guarded against null values.
> {code}
> // track the version of state we're using on the client side using 
> the _stateVer_ param
> DocCollection coll = getDocCollection(requestedCollection, null);
> int collVer = coll.getZNodeVersion();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10190) Potential NPE in CloudSolrClient when reading stale alias

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883907#comment-15883907
 ] 

ASF subversion and git services commented on SOLR-10190:


Commit 1b91349fcd29afb931ea77299ac47a7c783b1532 in lucene-solr's branch 
refs/heads/branch_6x from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1b91349 ]

SOLR-10190: Fixed assert message


> Potential NPE in CloudSolrClient when reading stale alias
> -
>
> Key: SOLR-10190
> URL: https://issues.apache.org/jira/browse/SOLR-10190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: clients - java, SolrJ
>Affects Versions: 5.5, 6.x, master (7.0)
>Reporter: Janosch Woschitz
>Assignee: Tomás Fernández Löbbe
>
> The CloudSolrClient raises a NullPointerException when CloudSolrClient::add 
> is invoked and pointed to an alias which references a collection which does 
> not exist anymore.
> {code}
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([1D00539A964E5C5D:D7D145363AD5CCA]:0)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1078)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
> {code}
> This is rather unexpected, since the CloudSolrClient usually raises a 
> SolrException containing a descriptive error message (e.g. "Collection not 
> found: xyz") when a collection cannot be resolved.
> In general this error condition could be triggered also by other edge cases 
> since CloudSolrClient::getDocCollection might return null but the code 
> following that invocation is not guarded against null values.
> {code}
> // track the version of state we're using on the client side using 
> the _stateVer_ param
> DocCollection coll = getDocCollection(requestedCollection, null);
> int collVer = coll.getZNodeVersion();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10190) Potential NPE in CloudSolrClient when reading stale alias

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883904#comment-15883904
 ] 

ASF subversion and git services commented on SOLR-10190:


Commit 99e8ef2304b67712d45a2393e649c5319aaac972 in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=99e8ef2 ]

SOLR-10190: Fixed assert message


> Potential NPE in CloudSolrClient when reading stale alias
> -
>
> Key: SOLR-10190
> URL: https://issues.apache.org/jira/browse/SOLR-10190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: clients - java, SolrJ
>Affects Versions: 5.5, 6.x, master (7.0)
>Reporter: Janosch Woschitz
>Assignee: Tomás Fernández Löbbe
>
> The CloudSolrClient raises a NullPointerException when CloudSolrClient::add 
> is invoked and pointed at an alias that references a collection that no 
> longer exists.
> {code}
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([1D00539A964E5C5D:D7D145363AD5CCA]:0)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1078)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
> {code}
> This is rather unexpected, since the CloudSolrClient usually raises a 
> SolrException containing a descriptive error message (e.g. "Collection not 
> found: xyz") when a collection cannot be resolved.
> In general, this error condition could also be triggered by other edge cases, 
> since CloudSolrClient::getDocCollection might return null but the code 
> following that invocation is not guarded against null values.
> {code}
> // track the version of state we're using on the client side using the _stateVer_ param
> DocCollection coll = getDocCollection(requestedCollection, null);
> int collVer = coll.getZNodeVersion();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10190) Potential NPE in CloudSolrClient when reading stale alias

2017-02-24 Thread ASF GitHub Bot (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883900#comment-15883900 ]

ASF GitHub Bot commented on SOLR-10190:
---

Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/160


> Potential NPE in CloudSolrClient when reading stale alias
> -
>
> Key: SOLR-10190
> URL: https://issues.apache.org/jira/browse/SOLR-10190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrJ
>Affects Versions: 5.5, 6.x, master (7.0)
>Reporter: Janosch Woschitz
>Assignee: Tomás Fernández Löbbe
>
> The CloudSolrClient raises a NullPointerException when CloudSolrClient::add 
> is invoked and pointed at an alias that references a collection that no 
> longer exists.
> {code}
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([1D00539A964E5C5D:D7D145363AD5CCA]:0)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1078)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
> {code}
> This is rather unexpected, since the CloudSolrClient usually raises a 
> SolrException containing a descriptive error message (e.g. "Collection not 
> found: xyz") when a collection cannot be resolved.
> In general, this error condition could also be triggered by other edge cases, 
> since CloudSolrClient::getDocCollection might return null but the code 
> following that invocation is not guarded against null values.
> {code}
> // track the version of state we're using on the client side using the _stateVer_ param
> DocCollection coll = getDocCollection(requestedCollection, null);
> int collVer = coll.getZNodeVersion();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #160: SOLR-10190 - Potential NPE in CloudSolrClient...

2017-02-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/160


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10190) Potential NPE in CloudSolrClient when reading stale alias

2017-02-24 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-10190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883897#comment-15883897 ]

ASF subversion and git services commented on SOLR-10190:


Commit 39887b86297e36785607f57cfd0e785bcae3c61a in lucene-solr's branch 
refs/heads/master from Tomas Fernandez Lobbe
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=39887b8 ]

SOLR-10190: Fix NPE in CloudSolrClient when reading stale alias

This closes #160


> Potential NPE in CloudSolrClient when reading stale alias
> -
>
> Key: SOLR-10190
> URL: https://issues.apache.org/jira/browse/SOLR-10190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java, SolrJ
>Affects Versions: 5.5, 6.x, master (7.0)
>Reporter: Janosch Woschitz
>Assignee: Tomás Fernández Löbbe
>
> The CloudSolrClient raises a NullPointerException when CloudSolrClient::add 
> is invoked and pointed at an alias that references a collection that no 
> longer exists.
> {code}
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([1D00539A964E5C5D:D7D145363AD5CCA]:0)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1078)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:106)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:71)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:85)
> {code}
> This is rather unexpected, since the CloudSolrClient usually raises a 
> SolrException containing a descriptive error message (e.g. "Collection not 
> found: xyz") when a collection cannot be resolved.
> In general, this error condition could also be triggered by other edge cases, 
> since CloudSolrClient::getDocCollection might return null but the code 
> following that invocation is not guarded against null values.
> {code}
> // track the version of state we're using on the client side using the _stateVer_ param
> DocCollection coll = getDocCollection(requestedCollection, null);
> int collVer = coll.getZNodeVersion();
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread Uwe Schindler (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883805#comment-15883805 ]

Uwe Schindler commented on SOLR-9640:
-

bq. Or do you know any fool-proof way of retrieving our own host:port outside 
of a request without peeking at the host and jetty.port vars?

Not really. I was checking ServletConfig/... but there is no host/port. The 
test-runner has its own host/port random generator. I think you can set the 
vars at the place where the in-process jetty is randomly started/configured.
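
For illustration, a sketch of that suggestion (the JettySolrRunner usage and the exact start-up hook are my assumptions, not the actual test-runner code):

{code}
// Hypothetical: where the in-process Jetty is started on a random port,
// publish the same properties the standalone PKI code would later read.
JettySolrRunner jetty = new JettySolrRunner(solrHome, "/solr", 0); // 0 = pick a free port
jetty.start();
System.setProperty("host", "127.0.0.1");
System.setProperty("jetty.port", Integer.toString(jetty.getLocalPort()));
{code}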

> Support PKI authentication and SSL in standalone-mode master/slave auth with 
> local security.json
> 
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch, SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{&shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get a 401 error. This issue will make PKI auth work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread Uwe Schindler
Thanks!

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

From: Jan Høydahl [mailto:jan@cominvent.com] 
Sent: Saturday, February 25, 2017 12:49 AM
To: dev@lucene.apache.org
Subject: Re: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL 
in standalone-mode master/slave auth with local security.json

 

Reverted.

 

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com  

 

25. feb. 2017 kl. 00.13 skrev Jan Høydahl:

 

Looking…

 

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com  

 

24. feb. 2017 kl. 19.48 skrev Uwe Schindler:

 

I have the feeling this broke Jenkins. Millions of NPEs with JDK 8u121:

  
https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19042/console

130 test failures by NPE in 
org.apache.solr.core.CoreContainer.getNodeNameLocal()

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de




-Original Message-
From: jan...@apache.org   [mailto:jan...@apache.org]
Sent: Friday, February 24, 2017 2:31 PM
To: comm...@lucene.apache.org  
Subject: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL
in standalone-mode master/slave auth with local security.json

Repository: lucene-solr
Updated Branches:
 refs/heads/master 5eeb8136f -> 95d6fc251


SOLR-9640: Support PKI authentication and SSL in standalone-mode
master/slave auth with local security.json


Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/95d6fc25
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/95d6fc25
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/95d6fc25

Branch: refs/heads/master
Commit: 95d6fc2512d6525b2354165553f0d6cc4d0d6310
Parents: 5eeb813
Author: Jan Høydahl
Authored: Fri Feb 24 14:26:48 2017 +0100
Committer: Jan Høydahl
Committed: Fri Feb 24 14:30:42 2017 +0100

--
solr/CHANGES.txt|   2 +
.../org/apache/solr/core/CoreContainer.java |   9 +-
.../solr/security/PKIAuthenticationPlugin.java  |  42 +-
.../org/apache/solr/servlet/HttpSolrCall.java   |   4 +-
.../apache/solr/servlet/SolrDispatchFilter.java |  11 +-
.../solr/security/BasicAuthDistributedTest.java | 136 +++
.../security/TestPKIAuthenticationPlugin.java   |  38 +-
.../solr/BaseDistributedSearchTestCase.java |  37 -
8 files changed, 260 insertions(+), 19 deletions(-)
--


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95d6fc25/solr/CHANGES.txt
--
diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
index 0302615..2c5f0db 100644
--- a/solr/CHANGES.txt
+++ b/solr/CHANGES.txt
@@ -134,6 +134,8 @@ New Features
  field must both be stored=false, indexed=false, docValues=true. (Ishan
Chattopadhyaya, hossman, noble,
  shalin, yonik)

+* SOLR-9640: Support PKI authentication and SSL in standalone-mode
master/slave auth with local security.json (janhoy)
+
Bug Fixes
--


http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95d6fc25/solr/core/src/java/org/apache/solr/core/CoreContainer.java
--
diff --git a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
index e3977d7..6115562 100644
--- a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
+++ b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
@@ -497,7 +497,9 @@ public class CoreContainer {
hostName = cfg.getNodeName();

zkSys.initZooKeeper(this, solrHome, cfg.getCloudConfig());
-if(isZooKeeperAware())  pkiAuthenticationPlugin = new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName());
+pkiAuthenticationPlugin = isZooKeeperAware() ?
+new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName()) :
+new PKIAuthenticationPlugin(this, getNodeNameLocal());

MDCLoggingContext.setNode(this);

@@ -618,6 +620,11 @@ public class CoreContainer {
}
  }

+  // Builds a node name to be used with PKIAuth.
+  private String getNodeNameLocal() {
+return getConfig().getCloudConfig().getHost()+":"+getConfig().getCloudConfig().getSolrHostPort()+"_solr";
+  }

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_121) - Build # 2934 - Still Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2934/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

875 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([77A13D10FA369A72]:0)
at 
org.apache.solr.core.CoreContainer.getNodeNameLocal(CoreContainer.java:605)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:481)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:177)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:140)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:146)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:109)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:741)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:731)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:559)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery: 1) Thread[id=15, 
name=solr-idle-connections-evictor, state=TIMED_WAITING, 
group=TGRP-TestFoldingMultitermExtrasQuery] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.update.UpdateShardHandler$IdleConnectionsEvictor$1.run(UpdateShardHandler.java:258)
 at java.lang.Thread.run(Thread.java:745)2) Thread[id=16, 
name=solr-idle-connections-evictor, state=TIMED_WAITING, 
group=TGRP-TestFoldingMultitermExtrasQuery] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.update.UpdateShardHandler$IdleConnectionsEvictor$1.run(UpdateShardHandler.java:258)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.analysis.TestFoldingMultitermExtrasQuery: 
   1) Thread[id=15, name=solr-idle-connections-evictor, state=TIMED_WAITING, 
group=TGRP-TestFoldingMultitermExtrasQuery]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.update.UpdateShardHandler$IdleConnectionsEvictor$1.run(UpdateShardHandler.java:258)
at java.lang.Thread.run(Thread.java:745)

[jira] [Commented] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread JIRA

[ https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883789#comment-15883789 ]

Jan Høydahl commented on SOLR-9640:
---

I think I might not have run the full test suite before pushing this time :( Need 
to dig further and harden the failing method to work in all circumstances.
It was the cloudConfig object that was null for a bunch of tests. In an earlier 
comment on this issue I wrote
bq. Generating nodeName from host and port properties of CloudConfig, which 
seems a bit odd when not running cloud...
So that was true then. I will try to pull host, port and context from env.vars 
instead of from the config object, and make sure that the test-runner also 
populates these vars if it does not already. Or do you know any fool-proof way 
of retrieving our own host:port outside of a request without peeking at the 
host and jetty.port vars?
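
A null-safe variant might look like the sketch below. The property fallback is my assumption (the host and jetty.port vars mentioned above), not the committed fix:

{code}
// Hypothetical: prefer CloudConfig when present, otherwise fall back to
// the host/jetty.port system properties populated by the test-runner.
private String getNodeNameLocal() {
  CloudConfig cloud = getConfig().getCloudConfig();
  if (cloud != null) {
    return cloud.getHost() + ":" + cloud.getSolrHostPort() + "_solr";
  }
  String host = System.getProperty("host", "localhost");
  String port = System.getProperty("jetty.port", "8983");
  return host + ":" + port + "_solr";
}
{code}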

> Support PKI authentication and SSL in standalone-mode master/slave auth with 
> local security.json
> 
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch, SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{&shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get a 401 error. This issue will make PKI auth work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883777#comment-15883777 ]

ASF subversion and git services commented on SOLR-9640:
---

Commit dbcbdeb07f1090bfae99e2cde21df684b7f20a26 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dbcbdeb ]

Revert "SOLR-9640: Support PKI authentication and SSL in standalone-mode 
master/slave auth with local security.json"

This reverts commit 024a39399dbb77678d06f70029575e0e66ded4b4.


> Support PKI authentication and SSL in standalone-mode master/slave auth with 
> local security.json
> 
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch, SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{&shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get a 401 error. This issue will make PKI auth work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread Jan Høydahl
Reverted.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 25. feb. 2017 kl. 00.13 skrev Jan Høydahl :
> 
> Looking…
> 
> --
> Jan Høydahl, search solution architect
> Cominvent AS - www.cominvent.com 
> 
>> 24. feb. 2017 kl. 19.48 skrev Uwe Schindler:
>> 
>> I have the feeling this broke Jenkins. Millions of NPEs with JDK 8u121:
>> 
>> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19042/console 
>> 
>> 
>> 130 test failures by NPE in 
>> org.apache.solr.core.CoreContainer.getNodeNameLocal()
>> 
>> -
>> Uwe Schindler
>> Achterdiek 19, D-28357 Bremen
>> http://www.thetaphi.de 
>> eMail: u...@thetaphi.de 
>> 
>>> -Original Message-
>>> From: jan...@apache.org  
>>> [mailto:jan...@apache.org ]
>>> Sent: Friday, February 24, 2017 2:31 PM
>>> To: comm...@lucene.apache.org 
>>> Subject: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL
>>> in standalone-mode master/slave auth with local security.json
>>> 
>>> Repository: lucene-solr
>>> Updated Branches:
>>>  refs/heads/master 5eeb8136f -> 95d6fc251
>>> 
>>> 
>>> SOLR-9640: Support PKI authentication and SSL in standalone-mode
>>> master/slave auth with local security.json
>>> 
>>> 
>>> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
>>> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/95d6fc25
>>> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/95d6fc25
>>> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/95d6fc25
>>> 
>>> Branch: refs/heads/master
>>> Commit: 95d6fc2512d6525b2354165553f0d6cc4d0d6310
>>> Parents: 5eeb813
>>> Author: Jan Høydahl
>>> Authored: Fri Feb 24 14:26:48 2017 +0100
>>> Committer: Jan Høydahl
>>> Committed: Fri Feb 24 14:30:42 2017 +0100
>>> 
>>> --
>>> solr/CHANGES.txt|   2 +
>>> .../org/apache/solr/core/CoreContainer.java |   9 +-
>>> .../solr/security/PKIAuthenticationPlugin.java  |  42 +-
>>> .../org/apache/solr/servlet/HttpSolrCall.java   |   4 +-
>>> .../apache/solr/servlet/SolrDispatchFilter.java |  11 +-
>>> .../solr/security/BasicAuthDistributedTest.java | 136 +++
>>> .../security/TestPKIAuthenticationPlugin.java   |  38 +-
>>> .../solr/BaseDistributedSearchTestCase.java |  37 -
>>> 8 files changed, 260 insertions(+), 19 deletions(-)
>>> --
>>> 
>>> 
>>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95d6fc25/solr/CHANGES.txt
>>> --
>>> diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
>>> index 0302615..2c5f0db 100644
>>> --- a/solr/CHANGES.txt
>>> +++ b/solr/CHANGES.txt
>>> @@ -134,6 +134,8 @@ New Features
>>>   field must both be stored=false, indexed=false, docValues=true. (Ishan
>>> Chattopadhyaya, hossman, noble,
>>>   shalin, yonik)
>>> 
>>> +* SOLR-9640: Support PKI authentication and SSL in standalone-mode
>>> master/slave auth with local security.json (janhoy)
>>> +
>>> Bug Fixes
>>> --
>>> 
>>> 
>>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95d6fc25/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>>> --
>>> diff --git a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>>> b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>>> index e3977d7..6115562 100644
>>> --- a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>>> +++ b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>>> @@ -497,7 +497,9 @@ public class CoreContainer {
>>> hostName = cfg.getNodeName();
>>> 
>>> zkSys.initZooKeeper(this, solrHome, cfg.getCloudConfig());
>>> -if(isZooKeeperAware())  pkiAuthenticationPlugin = new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName());
>>> +pkiAuthenticationPlugin = isZooKeeperAware() ?
>>> +new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName()) :
>>> +new PKIAuthenticationPlugin(this, getNodeNameLocal());
>>> 
>>> MDCLoggingContext.setNode(this);
>>> 
>>> @@ -618,6 

[jira] [Commented] (SOLR-10188) wget command not working with full import SOLR 4.10

2017-02-24 Thread Shawn Heisey (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-10188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883774#comment-15883774 ]

Shawn Heisey commented on SOLR-10188:
-

This hasn't come up on the mailing list, and it's been two days.  For the sake 
of others who come across this issue in the future, I will respond with the 
solution.

URLs with "#" in them are only usable in an actual browser -- the web server 
(the servlet container that's running Solr, in this case) never sees that 
character or any other character that comes after it.  Those URLs will not work 
with other tools like a Solr client or wget.  Removing # is not enough -- the 
admin UI almost always has slightly different parameter/path syntax than the 
actual HTTP API.

Below is the wget command you'll need to initiate a full import.  The clean 
parameter defaults to true on full-import, so I removed it:

{noformat}
wget "http://server:8983/solr/collection1/dataimport?command=full-import;
{noformat}
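
As an aside (my illustration, not part of the original answer): the fragment is stripped client-side, so from the server's point of view these two commands request the same resource:

{noformat}
# Everything from "#" onward never leaves the client, so this:
wget "http://server:8983/solr/#/collection1/dataimport"
# ...requests exactly the same URL as this:
wget "http://server:8983/solr/"
{noformat}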


> wget command not working with full import SOLR 4.10
> ---
>
> Key: SOLR-10188
> URL: https://issues.apache.org/jira/browse/SOLR-10188
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: ram Chaudhary
>
> Hi all,
> If I hit
> http://:8983/solr/#/collection1/dataimport//dataimport?command=full-import&clean=true
> in a browser, then it imports all the data.
> But if I do the same thing with the Linux command
> wget http:// or 
> Localhost:8983/solr/#/collection1/dataimport//dataimport?command=full-import&clean=true
> it does not import the data.
> I also tried after removing the "#" from the URL.
> I also tried it like this:
> wget http:// or 
> Localhost:8983/solr/#/collection1/dataimport//dataimport?command=full-import\&clean=true
> But it is not able to import the data.
> Can anyone help with it?



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883766#comment-15883766 ]

ASF subversion and git services commented on SOLR-9640:
---

Commit 30125f99daf38c4788a9763a89fddb3730c709af in lucene-solr's branch 
refs/heads/master from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=30125f9 ]

Revert "SOLR-9640: Support PKI authentication and SSL in standalone-mode 
master/slave auth with local security.json"

This reverts commit 95d6fc2512d6525b2354165553f0d6cc4d0d6310.


> Support PKI authentication and SSL in standalone-mode master/slave auth with 
> local security.json
> 
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch, SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{&shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get a 401 error. This issue will make PKI auth work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1693 - Unstable

2017-02-24 Thread Steve Rowe
Looks like these failures are due at least in part to SOLR-9640 - I’ve 
commented there.

--
Steve
www.lucidworks.com

> On Feb 24, 2017, at 5:49 PM, Apache Jenkins Server 
>  wrote:
> 
> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1693/
> 
> 1204 tests failed.
> FAILED:  org.apache.lucene.search.TestShardSearching.testSimple
> 
> Error Message:
> 
> 
> Stack Trace:
> java.lang.AssertionError
>   at 
> __randomizedtesting.SeedInfo.seed([2A1F64E90DB9131:3A12D2B0B72845E0]:0)
>   at org.apache.lucene.search.TopDocs.tieBreakLessThan(TopDocs.java:104)
>   at 
> org.apache.lucene.search.TopDocs$ScoreMergeSortQueue.lessThan(TopDocs.java:133)
>   at 
> org.apache.lucene.search.TopDocs$ScoreMergeSortQueue.lessThan(TopDocs.java:111)
>   at org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:263)
>   at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:140)
>   at org.apache.lucene.search.TopDocs.mergeAux(TopDocs.java:283)
>   at org.apache.lucene.search.TopDocs.merge(TopDocs.java:220)
>   at org.apache.lucene.search.TopDocs.merge(TopDocs.java:207)
>   at 
> org.apache.lucene.search.ShardSearchingTestBase$NodeState$ShardIndexSearcher.search(ShardSearchingTestBase.java:363)
>   at 
> org.apache.lucene.search.TestShardSearching.assertSame(TestShardSearching.java:310)
>   at 
> org.apache.lucene.search.TestShardSearching.testSimple(TestShardSearching.java:236)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>   

[jira] [Commented] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread Steve Rowe (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883726#comment-15883726 ]

Steve Rowe commented on SOLR-9640:
--

This is causing lots of failures on Jenkins. If on master I check out the hash 
just before this was committed (5eeb813), the failures stop.

E.g.:

{noformat}
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1693/

1204 tests failed.
{noformat}

One of the failures:

{noformat}
  [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=ChangedSchemaMergeTest -Dtests.method=testOptimizeDiffSchemas 
-Dtests.seed=22D84C33C358DBB4 -Dtests.slow=true -Dtests.locale=es-BO 
-Dtests.timezone=Africa/Conakry -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.02s J4  | ChangedSchemaMergeTest.testOptimizeDiffSchemas 
<<<
   [junit4]> Throwable #1: java.lang.NullPointerException
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([22D84C33C358DBB4:1943CAD796F5D3BB]:0)
   [junit4]>at 
org.apache.solr.core.CoreContainer.getNodeNameLocal(CoreContainer.java:625)
   [junit4]>at 
org.apache.solr.core.CoreContainer.load(CoreContainer.java:502)
   [junit4]>at 
org.apache.solr.schema.ChangedSchemaMergeTest.init(ChangedSchemaMergeTest.java:100)
   [junit4]>at 
org.apache.solr.schema.ChangedSchemaMergeTest.testOptimizeDiffSchemas(ChangedSchemaMergeTest.java:122)
{noformat}
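
For reference, the same bisection check expressed as commands (assumed, not from the original comment):

{noformat}
git checkout 5eeb8136f   # the parent of the SOLR-9640 commit on master
ant test -Dtestcase=ChangedSchemaMergeTest -Dtests.method=testOptimizeDiffSchemas
{noformat}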


> Support PKI authentication and SSL in standalone-mode master/slave auth with 
> local security.json
> 
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch, SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{&shards=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get a 401 error. This issue will make PKI auth work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread Jan Høydahl
Looking…

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 24. feb. 2017 kl. 19.48 skrev Uwe Schindler :
> 
> I have the feeling this broke Jenkins. Millions of NPEs with JDK 8u121:
> 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19042/console 
> 
> 
> 130 test failures by NPE in 
> org.apache.solr.core.CoreContainer.getNodeNameLocal()
> 
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> http://www.thetaphi.de 
> eMail: u...@thetaphi.de 
> 
>> -Original Message-
>> From: jan...@apache.org [mailto:jan...@apache.org]
>> Sent: Friday, February 24, 2017 2:31 PM
>> To: comm...@lucene.apache.org 
>> Subject: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL
>> in standalone-mode master/slave auth with local security.json
>> 
>> Repository: lucene-solr
>> Updated Branches:
>>  refs/heads/master 5eeb8136f -> 95d6fc251
>> 
>> 
>> SOLR-9640: Support PKI authentication and SSL in standalone-mode
>> master/slave auth with local security.json
>> 
>> 
>> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
>> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/95d6fc25
>> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/95d6fc25
>> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/95d6fc25
>> 
>> Branch: refs/heads/master
>> Commit: 95d6fc2512d6525b2354165553f0d6cc4d0d6310
>> Parents: 5eeb813
>> Author: Jan Høydahl
>> Authored: Fri Feb 24 14:26:48 2017 +0100
>> Committer: Jan Høydahl
>> Committed: Fri Feb 24 14:30:42 2017 +0100
>> 
>> --
>> solr/CHANGES.txt|   2 +
>> .../org/apache/solr/core/CoreContainer.java |   9 +-
>> .../solr/security/PKIAuthenticationPlugin.java  |  42 +-
>> .../org/apache/solr/servlet/HttpSolrCall.java   |   4 +-
>> .../apache/solr/servlet/SolrDispatchFilter.java |  11 +-
>> .../solr/security/BasicAuthDistributedTest.java | 136 +++
>> .../security/TestPKIAuthenticationPlugin.java   |  38 +-
>> .../solr/BaseDistributedSearchTestCase.java |  37 -
>> 8 files changed, 260 insertions(+), 19 deletions(-)
>> --
>> 
>> 
>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95d6fc25/solr/CHANGES.txt
>> --
>> diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
>> index 0302615..2c5f0db 100644
>> --- a/solr/CHANGES.txt
>> +++ b/solr/CHANGES.txt
>> @@ -134,6 +134,8 @@ New Features
>>   field must both be stored=false, indexed=false, docValues=true. (Ishan
>> Chattopadhyaya, hossman, noble,
>>   shalin, yonik)
>> 
>> +* SOLR-9640: Support PKI authentication and SSL in standalone-mode
>> master/slave auth with local security.json (janhoy)
>> +
>> Bug Fixes
>> --
>> 
>> 
>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95d6fc25/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>> --
>> diff --git a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>> b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>> index e3977d7..6115562 100644
>> --- a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>> +++ b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
>> @@ -497,7 +497,9 @@ public class CoreContainer {
>> hostName = cfg.getNodeName();
>> 
>> zkSys.initZooKeeper(this, solrHome, cfg.getCloudConfig());
>> -if(isZooKeeperAware())  pkiAuthenticationPlugin = new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName());
>> +pkiAuthenticationPlugin = isZooKeeperAware() ?
>> +new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName()) :
>> +new PKIAuthenticationPlugin(this, getNodeNameLocal());
>> 
>> MDCLoggingContext.setNode(this);
>> 
>> @@ -618,6 +620,11 @@ public class CoreContainer {
>> }
>>   }
>> 
>> +  // Builds a node name to be used with PKIAuth.
>> +  private String getNodeNameLocal() {
>> +return getConfig().getCloudConfig().getHost()+":"+getConfig().getCloudConfig().getSolrHostPort()+"_solr";
>> +  }
>> +
>>   public void securityNodeChanged() {
>> log.info("Security node changed, reloading security.json");
>> reloadSecurityProperties();
>> 
>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/95d6fc25/solr/core/src/java/org/apache/solr/security/PKIAuthenticationPlugin.java
>> 

[jira] [Commented] (SOLR-10177) Consolidate randomized usage of PointFields in schemas

2017-02-24 Thread JIRA

[ https://issues.apache.org/jira/browse/SOLR-10177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883703#comment-15883703 ]

Tomás Fernández Löbbe commented on SOLR-10177:
--

bq. I think we should rename these two as: (a) "solr.tests.floatFieldType" as 
pfloat or float, when used on a per field basis, or (b) 
"solr.tests.floatClassName" as FloatPointField, when used in a fieldType 
definition.
+1. I was originally planning on using those system properties to define 
the full class of the field type to use, but later I realized that was not 
possible because the different types would have different attributes in the 
schema definition.
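
To make the proposed convention concrete (an illustration only; the property names come from the quote above, the usage is my assumption):

{noformat}
# per-field randomization picks a fieldType *name*:
-Dsolr.tests.floatFieldType=pfloat      # or float
# per-fieldType randomization picks a field type *class*:
-Dsolr.tests.floatClassName=FloatPointField
{noformat}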

> Consolidate randomized usage of PointFields in schemas
> --
>
> Key: SOLR-10177
> URL: https://issues.apache.org/jira/browse/SOLR-10177
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>
> schema-inplace-updates.xml uses per-fieldType point fields randomization, 
> whereas some other schemas use per-field randomization. However, the variable 
> names are similar and should be revisited and standardized across our tests.
> Discussions here: 
> https://issues.apache.org/jira/browse/SOLR-5944?focusedCommentId=15875108&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15875108.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1693 - Unstable

2017-02-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1693/

1204 tests failed.
FAILED:  org.apache.lucene.search.TestShardSearching.testSimple

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([2A1F64E90DB9131:3A12D2B0B72845E0]:0)
at org.apache.lucene.search.TopDocs.tieBreakLessThan(TopDocs.java:104)
at 
org.apache.lucene.search.TopDocs$ScoreMergeSortQueue.lessThan(TopDocs.java:133)
at 
org.apache.lucene.search.TopDocs$ScoreMergeSortQueue.lessThan(TopDocs.java:111)
at org.apache.lucene.util.PriorityQueue.upHeap(PriorityQueue.java:263)
at org.apache.lucene.util.PriorityQueue.add(PriorityQueue.java:140)
at org.apache.lucene.search.TopDocs.mergeAux(TopDocs.java:283)
at org.apache.lucene.search.TopDocs.merge(TopDocs.java:220)
at org.apache.lucene.search.TopDocs.merge(TopDocs.java:207)
at 
org.apache.lucene.search.ShardSearchingTestBase$NodeState$ShardIndexSearcher.search(ShardSearchingTestBase.java:363)
at 
org.apache.lucene.search.TestShardSearching.assertSame(TestShardSearching.java:310)
at 
org.apache.lucene.search.TestShardSearching.testSimple(TestShardSearching.java:236)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[jira] [Closed] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Jim Ferenczi (JIRA)

 [ https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jim Ferenczi closed LUCENE-7708.

Resolution: Fixed

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abuse to 2 candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> fix would be to remove the attribute from these token filters, but this could 
> break BWC.
> This is a serious bug, though, since shingles and CJK bigrams now produce 
> invalid queries.
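
For context, a sketch of the contract at issue (illustrative only; the filter below is hypothetical, but the attribute API is the real Lucene analysis API):

{code}
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;

// posLen must count the positions a token spans in the *emitted* stream.
// Encoding "number of terms glued into this token" (shingle size, or a
// constant 2 for CJK bigrams) without emitting the unigrams underneath
// points past positions where no token starts -- a disconnected graph.
final class PosLenDemoFilter extends TokenFilter {
  private final PositionLengthAttribute posLenAtt =
      addAttribute(PositionLengthAttribute.class);

  PosLenDemoFilter(TokenStream in) {
    super(in);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    posLenAtt.setPositionLength(1); // a plain single-position token
    return true;
  }
}
{code}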



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Jim Ferenczi (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883672#comment-15883672 ]

Jim Ferenczi edited comment on LUCENE-7708 at 2/24/17 10:49 PM:


Thanks [~steve_rowe] and [~mikemccand] !


was (Author: jim.ferenczi):
Thanks [~sar...@syr.edu] and [~mikemccand] !

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abuse to 2 candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> fix would be to remove the attribute from these token filters, but this could 
> break BWC.
> This is a serious bug, though, since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Jim Ferenczi (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883672#comment-15883672 ]

Jim Ferenczi commented on LUCENE-7708:
--

Thanks [~sar...@syr.edu] and [~mikemccand] !

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abuse to 2 candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> fix would be to remove the attribute from these token filters, but this could 
> break BWC.
> This is a serious bug, though, since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883671#comment-15883671 ]

ASF subversion and git services commented on LUCENE-7708:
-

Commit 6c63df0b15f735907438514f3b4b702680d74588 in lucene-solr's branch 
refs/heads/branch_6x from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6c63df0 ]

LUCENE-7708: Fix position length attribute set by the ShingleFilter when 
outputUnigrams=false


> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abuse to 2 candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> fix would be to remove the attribute from these token filters, but this could 
> break BWC.
> This is a serious bug, though, since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15883655#comment-15883655 ]

ASF subversion and git services commented on LUCENE-7708:
-

Commit 57a42e4ec54aebac40c1ef7dc93d933cd00dbe1e in lucene-solr's branch 
refs/heads/master from [~jim.ferenczi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=57a42e4 ]

LUCENE-7708: Fix position length attribute set by the ShingleFilter when 
outputUnigrams=false


> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abuse to 2 candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> fix would be to remove the attribute from these token filters, but this could 
> break BWC.
> This is a serious bug, though, since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7710) BlockPackedReader to throw better exception

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883615#comment-15883615
 ] 

ASF subversion and git services commented on LUCENE-7710:
-

Commit e903f69ab31384b5af17e38e2257dca4bee5a673 in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e903f69 ]

LUCENE-7710: BlockPackedReader now throws CorruptIndexException if bitsPerValue 
is out of bounds, not generic IOException


> BlockPackedReader to throw better exception
> ---
>
> Key: LUCENE-7710
> URL: https://issues.apache.org/jira/browse/LUCENE-7710
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7710.patch
>
>
> BlockPackedReader doesn't tell us which file we failed to read. Here's a 
> stack trace from a 4.10.3 install, but it applies to trunk as well.
> {noformat}
> org.apache.solr.common.SolrException; null:java.io.IOException: Corrupted
> at 
> org.apache.lucene.util.packed.BlockPackedReader.<init>(BlockPackedReader.java:56)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadNumeric(Lucene42DocValuesProducer.java:204)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getNumeric(Lucene42DocValuesProducer.java:174)
> at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getNumeric(PerFieldDocValuesFormat.java:248)
> at 
> org.apache.lucene.index.SegmentCoreReaders.getNumericDocValues(SegmentCoreReaders.java:194)
> at 
> org.apache.lucene.index.SegmentReader.getNumericDocValues(SegmentReader.java:229)
> at org.apache.lucene.search.FieldCacheImpl.getLongs(FieldCacheImpl.java:883)
> {noformat}
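The direction of the fix, as a hedged sketch (illustrative only, not the 
committed diff; the real constructor reads bitsPerValue from the stream 
header):

{noformat}
import java.io.IOException;
import org.apache.lucene.index.CorruptIndexException;
import org.apache.lucene.store.IndexInput;

final class BitsPerValueCheck {
  // Validate the header value and report *which* input was corrupted;
  // CorruptIndexException records the file/slice description, which a
  // bare new IOException("Corrupted") does not.
  static void checkBitsPerValue(int bitsPerValue, IndexInput in)
      throws IOException {
    if (bitsPerValue > 64) {
      throw new CorruptIndexException(
          "bitsPerValue=" + bitsPerValue + " is out of bounds (> 64)", in);
    }
  }
}
{noformat}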



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7710) BlockPackedReader to throw better exception

2017-02-24 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7710.

   Resolution: Fixed
Fix Version/s: 6.5
   master (7.0)

Thanks [~mdrob]!

> BlockPackedReader to throw better exception
> ---
>
> Key: LUCENE-7710
> URL: https://issues.apache.org/jira/browse/LUCENE-7710
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Fix For: master (7.0), 6.5
>
> Attachments: LUCENE-7710.patch
>
>
> BlockPackedReader doesn't tell us which file we failed to read. Here's a 
> stack trace from a 4.10.3 install, but it applies to trunk as well.
> {noformat}
> org.apache.solr.common.SolrException; null:java.io.IOException: Corrupted
> at 
> org.apache.lucene.util.packed.BlockPackedReader.<init>(BlockPackedReader.java:56)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadNumeric(Lucene42DocValuesProducer.java:204)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getNumeric(Lucene42DocValuesProducer.java:174)
> at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getNumeric(PerFieldDocValuesFormat.java:248)
> at 
> org.apache.lucene.index.SegmentCoreReaders.getNumericDocValues(SegmentCoreReaders.java:194)
> at 
> org.apache.lucene.index.SegmentReader.getNumericDocValues(SegmentReader.java:229)
> at org.apache.lucene.search.FieldCacheImpl.getLongs(FieldCacheImpl.java:883)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_121) - Build # 19042 - Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19042/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1198 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([ECE67AF9B44CDD8A]:0)
at 
org.apache.solr.core.CoreContainer.getNodeNameLocal(CoreContainer.java:625)
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:502)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:177)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:140)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:146)
at org.apache.solr.util.TestHarness.<init>(TestHarness.java:109)
at org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:742)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:732)
at org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:560)
at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analysis.TestFoldingMultitermExtrasQuery

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [InternalHttpClient, 
InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
  at 
org.apache.solr.update.UpdateShardHandler.<init>(UpdateShardHandler.java:90)  
at org.apache.solr.core.CoreContainer.load(CoreContainer.java:490)  at 
org.apache.solr.util.TestHarness.<init>(TestHarness.java:177)  at 
org.apache.solr.util.TestHarness.<init>(TestHarness.java:140)  at 
org.apache.solr.util.TestHarness.<init>(TestHarness.java:146)  at 
org.apache.solr.util.TestHarness.<init>(TestHarness.java:109)  at 
org.apache.solr.SolrTestCaseJ4.createCore(SolrTestCaseJ4.java:742)  at 
org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:732)  at 
org.apache.solr.SolrTestCaseJ4.initCore(SolrTestCaseJ4.java:560)  at 
org.apache.solr.analysis.TestFoldingMultitermExtrasQuery.beforeTests(TestFoldingMultitermExtrasQuery.java:36)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  at 

[jira] [Commented] (LUCENE-7710) BlockPackedReader to throw better exception

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883614#comment-15883614
 ] 

ASF subversion and git services commented on LUCENE-7710:
-

Commit cab3aae11dd6e781acabf513095eb11606feddde in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cab3aae ]

LUCENE-7710: BlockPackedReader now throws CorruptIndexException if bitsPerValue 
is out of bounds, not generic IOException


> BlockPackedReader to throw better exception
> ---
>
> Key: LUCENE-7710
> URL: https://issues.apache.org/jira/browse/LUCENE-7710
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Attachments: LUCENE-7710.patch
>
>
> BlockPackedReader doesn't tell us which file we failed to read. Here's a 
> stack trace from a 4.10.3 install, but it applies to trunk as well.
> {noformat}
> org.apache.solr.common.SolrException; null:java.io.IOException: Corrupted
> at 
> org.apache.lucene.util.packed.BlockPackedReader.<init>(BlockPackedReader.java:56)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadNumeric(Lucene42DocValuesProducer.java:204)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getNumeric(Lucene42DocValuesProducer.java:174)
> at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getNumeric(PerFieldDocValuesFormat.java:248)
> at 
> org.apache.lucene.index.SegmentCoreReaders.getNumericDocValues(SegmentCoreReaders.java:194)
> at 
> org.apache.lucene.index.SegmentReader.getNumericDocValues(SegmentReader.java:229)
> at org.apache.lucene.search.FieldCacheImpl.getLongs(FieldCacheImpl.java:883)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7707) Only assign ScoreDoc#shardIndex if it was already assigned to non default (-1) value

2017-02-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883608#comment-15883608
 ] 

Michael McCandless commented on LUCENE-7707:


OK I committed that last patch.

> Only assign ScoreDoc#shardIndex if it was already assigned to non default 
> (-1) value
> 
>
> Key: LUCENE-7707
> URL: https://issues.apache.org/jira/browse/LUCENE-7707
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (7.0), 6.5.0
>
> Attachments: LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch
>
>
> When you use TopDocs.merge today it always overrides the ScoreDoc#shardIndex 
> value. The assumption made here is that all shard results are merged 
> at once, which is not necessarily the case. If, for instance, incremental merge 
> phases are applied, the shard index doesn't correspond to the index in the 
> outer TopDocs array. To make this a backwards-compatible yet 
> non-controversial change, we could change the internals of TopDocs#merge to 
> only assign this value if it has not already been assigned a non-default 
> (i.e. not -1) value, to allow multiple or sparse top docs merging.
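As a hedged sketch of the rule proposed above (illustrative only; the 
committed change instead adds an explicit setShardIndex boolean to 
TopDocs.merge), the stamping logic amounts to:

{noformat}
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

final class ShardIndexStamping {
  // Only stamp shardIndex when the caller hasn't already assigned it,
  // i.e. when it still holds the -1 default.
  static void stampShardIndex(TopDocs shardHits, int implicitShardIndex) {
    for (ScoreDoc sd : shardHits.scoreDocs) {
      if (sd.shardIndex == -1) {           // untouched: use implicit index
        sd.shardIndex = implicitShardIndex;
      }                                    // otherwise keep incoming value
    }
  }
}
{noformat}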



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7707) Only assign ScoreDoc#shardIndex if it was already assigned to non default (-1) value

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883605#comment-15883605
 ] 

ASF subversion and git services commented on LUCENE-7707:
-

Commit 2e56c0e50564c8feeeb2831dd36cff1e9b23a00f in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2e56c0e ]

LUCENE-7707: add explicit boolean to TopDocs.merge to govern whether incoming 
or implicit shard index should be used


> Only assign ScoreDoc#shardIndex if it was already assigned to non default 
> (-1) value
> 
>
> Key: LUCENE-7707
> URL: https://issues.apache.org/jira/browse/LUCENE-7707
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (7.0), 6.5.0
>
> Attachments: LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch
>
>
> When you use TopDocs.merge today it always overrides the ScoreDoc#shardIndex 
> value. The assumption made here is that all shard results are merged 
> at once, which is not necessarily the case. If, for instance, incremental merge 
> phases are applied, the shard index doesn't correspond to the index in the 
> outer TopDocs array. To make this a backwards-compatible yet 
> non-controversial change, we could change the internals of TopDocs#merge to 
> only assign this value if it has not already been assigned a non-default 
> (i.e. not -1) value, to allow multiple or sparse top docs merging.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7707) Only assign ScoreDoc#shardIndex if it was already assigned to non default (-1) value

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883599#comment-15883599
 ] 

ASF subversion and git services commented on LUCENE-7707:
-

Commit d00c5cae2b80941bbe71c091d42659e0c504b5ec in lucene-solr's branch 
refs/heads/branch_6x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d00c5ca ]

LUCENE-7707: add explicit boolean to TopDocs.merge to govern whether incoming 
or implicit shard index should be used


> Only assign ScoreDoc#shardIndex if it was already assigned to non default 
> (-1) value
> 
>
> Key: LUCENE-7707
> URL: https://issues.apache.org/jira/browse/LUCENE-7707
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (7.0), 6.5.0
>
> Attachments: LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch
>
>
> When you use TopDocs.merge today it always overrides the ScoreDoc#shardIndex 
> value. The assumption made here is that all shard results are merged 
> at once, which is not necessarily the case. If, for instance, incremental merge 
> phases are applied, the shard index doesn't correspond to the index in the 
> outer TopDocs array. To make this a backwards-compatible yet 
> non-controversial change, we could change the internals of TopDocs#merge to 
> only assign this value if it has not already been assigned a non-default 
> (i.e. not -1) value, to allow multiple or sparse top docs merging.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7703) Record the version that was used at index creation time

2017-02-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883579#comment-15883579
 ] 

Michael McCandless commented on LUCENE-7703:


+1 to the patch.

> Record the version that was used at index creation time
> ---
>
> Key: LUCENE-7703
> URL: https://issues.apache.org/jira/browse/LUCENE-7703
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7703.patch
>
>
> SegmentInfos already records the version that was used to write a commit and 
> the version that was used to write the oldest segment in the index. In 
> addition to those, I think it could be useful to record the Lucene version 
> that was used to create the index. I think it could help with:
>  - Debugging: there are things that change based on Lucene versions; for 
> instance, we will reject broken offsets in term vectors as of 7.0. Knowing the 
> version that was used to create the index can be very useful for knowing what 
> assumptions we can make about an index.
>  - Backward compatibility: the codec API helped simplify backward 
> compatibility of the index files a lot. However, for everything that is done 
> on top of the codec API, like analysis or the computation of length-norm 
> factors, backward compatibility needs to be handled on top of Lucene. Maybe 
> we could simplify this?
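For context, here is a small sketch of reading the version information 
SegmentInfos already records (accessor names assumed from the 6.x API); the 
proposal would add a third, creation-time version alongside these:

{noformat}
import java.nio.file.Paths;
import org.apache.lucene.index.SegmentInfos;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class IndexVersionsDemo {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]))) {
      SegmentInfos infos = SegmentInfos.readLatestCommit(dir);
      // version that wrote the current commit
      System.out.println("commit: " + infos.getCommitLuceneVersion());
      // version that wrote the oldest segment still in the index
      System.out.println("oldest segment: " + infos.getMinSegmentLuceneVersion());
      // a getIndexCreatedVersion()-style accessor is what this issue
      // proposes to add (hypothetical name)
    }
  }
}
{noformat}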



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7700) Move throughput control and merge aborting out of IndexWriter's core?

2017-02-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883563#comment-15883563
 ] 

Michael McCandless commented on LUCENE-7700:


Thanks [~dawid.weiss]; I'll have a look...

> Move throughput control and merge aborting out of IndexWriter's core?
> -
>
> Key: LUCENE-7700
> URL: https://issues.apache.org/jira/browse/LUCENE-7700
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: LUCENE-7700.patch, LUCENE-7700.patch
>
>
> Here is a bit of background:
> - I wanted to implement a custom merging strategy with its own (global) 
> I/O flow control,
> - currently, the CMS is tightly coupled to a few classes -- MergeRateLimiter, 
> OneMerge, IndexWriter.
> Looking at the code, it seems to me that everything related to I/O 
> control could be nicely pulled out into the classes that explicitly control the 
> merging process, that is, MergePolicy and MergeScheduler only. By default, one 
> could even run without any additional I/O accounting overhead (which is 
> currently in there, even if one doesn't use the CMS's throughput control).
> Such a refactoring would also give us a chance to move things where they 
> belong -- job aborting into OneMerge (currently in RateLimiter), rate limiter 
> lifecycle bound to OneMerge (MergeScheduler could then use per-merge or 
> global accounting, as it pleases).
> Just a thought and some initial refactorings for discussion.
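To make the per-merge versus global accounting point concrete, here is a 
minimal, self-contained sketch (plain Java; deliberately not Lucene's actual 
MergeRateLimiter) of a limiter a MergeScheduler could own -- one shared 
instance gives global accounting, one instance per OneMerge gives per-merge 
accounting:

{noformat}
// Simple pacing limiter: callers report bytes written and are stalled
// whenever they get ahead of the configured MB/sec budget.
final class SimpleMergeIORateLimiter {
  private final double mbPerSec;
  private long bytesInWindow;
  private long windowStartNanos = System.nanoTime();

  SimpleMergeIORateLimiter(double mbPerSec) {
    this.mbPerSec = mbPerSec;
  }

  synchronized void maybePause(long newBytes) throws InterruptedException {
    bytesInWindow += newBytes;
    double elapsedSec = (System.nanoTime() - windowStartNanos) / 1e9;
    double budgetSec = bytesInWindow / (mbPerSec * 1024 * 1024);
    long stallMs = (long) ((budgetSec - elapsedSec) * 1000);
    if (stallMs > 2) {
      Thread.sleep(stallMs); // writer is ahead of budget: stall it
    }
    if (elapsedSec > 1.0) {  // roll the accounting window
      bytesInWindow = 0;
      windowStartNanos = System.nanoTime();
    }
  }
}
{noformat}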



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7709) Remove unused backward-compatibility logic in codecs

2017-02-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883560#comment-15883560
 ] 

Michael McCandless commented on LUCENE-7709:


+1, thanks [~jpountz]

> Remove unused backward-compatibility logic in codecs
> 
>
> Key: LUCENE-7709
> URL: https://issues.apache.org/jira/browse/LUCENE-7709
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7709.patch
>
>
> Some of our index formats were used before 6.0 was released and accumulated 
> backward-compatibility logic that is not necessary anymore with Lucene 7.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7710) BlockPackedReader to throw better exception

2017-02-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883554#comment-15883554
 ] 

Michael McCandless commented on LUCENE-7710:


Why are you hitting so many crazy exceptions [~mdrob]!

Patch looks great; I'll commit soon.  Thanks [~mdrob]!

> BlockPackedReader to throw better exception
> ---
>
> Key: LUCENE-7710
> URL: https://issues.apache.org/jira/browse/LUCENE-7710
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Attachments: LUCENE-7710.patch
>
>
> BlockPackedReader doesn't tell us which file we failed to read. Here's a 
> stack trace from a 4.10.3 install, but it applies to trunk as well.
> {noformat}
> org.apache.solr.common.SolrException; null:java.io.IOException: Corrupted
> at 
> org.apache.lucene.util.packed.BlockPackedReader.<init>(BlockPackedReader.java:56)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadNumeric(Lucene42DocValuesProducer.java:204)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getNumeric(Lucene42DocValuesProducer.java:174)
> at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getNumeric(PerFieldDocValuesFormat.java:248)
> at 
> org.apache.lucene.index.SegmentCoreReaders.getNumericDocValues(SegmentCoreReaders.java:194)
> at 
> org.apache.lucene.index.SegmentReader.getNumericDocValues(SegmentReader.java:229)
> at org.apache.lucene.search.FieldCacheImpl.getLongs(FieldCacheImpl.java:883)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2017-02-24 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883546#comment-15883546
 ] 

Christine Poerschke commented on SOLR-6203:
---

Hi Judith,

Thanks for following up on this ticket!

I've tried to remind myself about where we had left things by applying your 
Dec 5th patch to the 
[jira/solr-6203|https://github.com/apache/lucene-solr/tree/jira/solr-6203] 
working branch and then merging in the master branch changes, including the 
SOLR-9890 piece. After resolving the merges "it compiles", but I haven't run any 
tests or anything.

From your Dec 2nd comments:

bq. ... I got to wondering about the call to schema.getFieldOrNull() in the new 
implWeightSortSpec() function from the SOLR-9660 patch. That function allows 
the dynamic '*' field to lay claim to schema fields which SortSpecParsing 
carefully protected from it, just as it does when called by the 
XXXResultTransformer functions we are gearing up to modify. ...

Merging the master branch into the working branch didn't help too much with 
jogging my memory, but this sounds like a very good lead.

_The schema.getFieldOrNull() lets the dynamic '\*' field claim 
'sqrt(popularity) desc' (or is it only 'popularity'?) from the sort, whereas in 
SortSpecParsing there's the [// short circuit test for a really simple field 
name ... // let's try it as a function 
instead|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/SortSpecParsing.java#L103-L108]
 logic which protects the sort from being misinterpreted as a '\*' dynamic 
field._
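In code form the misinterpretation looks roughly like this (hypothetical 
sketch; only IndexSchema.getFieldOrNull is the real API here):

{noformat}
import org.apache.solr.schema.IndexSchema;
import org.apache.solr.schema.SchemaField;

final class DynamicFieldPitfall {
  // With a catch-all dynamic field (name="*"), getFieldOrNull happily
  // "resolves" a function sort as a field, whereas SortSpecParsing first
  // tries the function interpretation and so is protected from this.
  static boolean claimedByDynamicField(IndexSchema schema, String sortArg) {
    SchemaField f = schema.getFieldOrNull(sortArg); // e.g. "sqrt(popularity)"
    return f != null; // true under '*', although sortArg is a function
  }
}
{noformat}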

That is how I read your comment combined with the code; are we on the same page 
on this point? Assuming we are, and I think we are, then yes, your 
weightSort/rewrite/createWeight and implWeightSortSpec observations and 
questions make sense to me.

So, following in that direction, the 'solution under consideration' is then:
* pass SortSpec rather than just Sort around (much of that already done via 
linked preparatory refactors)
* have the ShardResultTransformer classes use the SortSpec's SchemaField 
objects because
** the Sort's SortField requires use of IndexSchema.getFieldOrNull for 
conversion into a SchemaField object
** IndexSchema.getFieldOrNull does not protect the sort function from being 
misinterpreted as a '*' dynamic field
** the SortSpec's SchemaField objects require no conversion
** the SortSpec's SchemaField objects originally came from SortSpecParsing 
which afforded protection from the '*' dynamic field

Does that summary make sense so far? Assuming it does, then on to the 
'complication and remaining questions':
* QueryComponent doesn't just use the Sort 'as is' but does this (not 
fully understood by us) 'weighting of sort' thing
* 'weighting of sort' is a SolrIndexSearcher 
[thing|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L1001-L1004]
* as part of the SOLR-9660 refactor QueryComponent doing 'weighting of sort' 
became 'weighting of sort spec' which is still a SolrIndexSearcher 
[thing|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java#L1006-L1030]
* 'weighting of sort spec' needs to consider the SchemaField objects in the 
sort spec
** question 1: is it fair to assume that the SchemaField objects of the 
original and the rewritten/weighted sort match?
** question 2: if rewriting/weighting turns the original sort into a null sort, 
what should the SchemaField objects be for the rewritten/weighted sort spec?

Right, okay, that was a lot of writing, thank you for reading this far :-)

Does it sort of (no pun intended) help sum up where we think we are here?

If it does then the next step (not for today) will be to find answers to the 
two open questions and decide on how to proceed with the patch/working branch.

Have a nice weekend!

Christine

> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Assignee: Christine Poerschke
> Attachments: README, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203-unittest.patch, SOLR-6203-unittest.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 

[jira] [Commented] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883528#comment-15883528
 ] 

Steve Rowe commented on LUCENE-7708:


+1, LGTM, all {{lucene/analysis/common/}} tests pass for me with the latest 
patch.

Also, 1000 beasting iterations of TestRandomChains didn't trigger any failures 
with this patch (other than the unrelated one at LUCENE-7711).

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abuse to 2 candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> fix would be to remove the attribute from these token filters, but this could 
> break BWC.
> Still, this is a serious bug since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883526#comment-15883526
 ] 

Michael McCandless commented on LUCENE-7708:


+1, thanks [~jim.ferenczi]!

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abuse to 2 candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> fix would be to remove the attribute from these token filters, but this could 
> break BWC.
> Still, this is a serious bug since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9887) Add KeepWordFilter, StemmerOverrideFilter, StopFilterFactory, SynonymFilter that reads data from a JDBC source

2017-02-24 Thread Torsten Bøgh Köster (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883440#comment-15883440
 ] 

Torsten Bøgh Köster commented on SOLR-9887:
---

As a co-author of said project, I'm happy to see that a discussion has 
started. We're currently implementing another Solr-based search where we make 
heavy use of a number of huge synonym lists (e.g. for German stemming). 
The problem is that the only out-of-the-box way to use large synonym files with 
Solr is to package them as a JAR and supply them on the classpath or in the 
external libs folder. 

As Jan said, ZooKeeper would be an ideal store but is limited to 1 MB, and you 
do not want to mess around with that. I like the idea, Alexandre, that Solr 
should maintain resources in a push fashion and act as a pure data store. Is 
there a way that we could push large synonym files into the system collection 
(that would be my Option 3 ;-)?

In the current project, JDBC storage is not the preferred way of handling data, 
so we may extend the project to another NoSQL datastore - or even to 
the system collection as mentioned above. The main implementation idea of the 
solr-jdbc project is to swap the ResourceLoader for a datastore-dependent one 
[1]; see the sketch below. I'll check whether we could make this design more 
interchangeable for future use of other data stores or the native system 
collection.

Regarding updates, the solr-jdbc project pulls updated synonym 
definitions upon Searcher construction, so there is no in-between-Searcher 
synonym reloading - but it would certainly be a nice-to-have feature.

[1] 
https://github.com/shopping24/solr-jdbc/blob/master/src/main/java/com/s24/search/solr/analysis/jdbc/JdbcResourceLoader.java
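For illustration, a hedged sketch of that ResourceLoader swap (this is not 
the actual solr-jdbc code; it assumes Lucene's ResourceLoader interface, a 
JNDI-bound DataSource, and a hypothetical resources(name, content) table):

{noformat}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.apache.lucene.analysis.util.ResourceLoader;

/** Serves analysis resources (synonym/stopword lists) from a database
 *  table instead of the filesystem or ZooKeeper; everything else is
 *  delegated to the original loader. */
final class JdbcBackedResourceLoader implements ResourceLoader {
  private final DataSource dataSource;
  private final ResourceLoader delegate;

  JdbcBackedResourceLoader(DataSource dataSource, ResourceLoader delegate) {
    this.dataSource = dataSource;
    this.delegate = delegate;
  }

  @Override
  public InputStream openResource(String resource) throws IOException {
    String sql = "SELECT content FROM resources WHERE name = ?";
    try (Connection c = dataSource.getConnection();
         PreparedStatement ps = c.prepareStatement(sql)) {
      ps.setString(1, resource);
      try (ResultSet rs = ps.executeQuery()) {
        if (rs.next()) {
          return new ByteArrayInputStream(
              rs.getString(1).getBytes(StandardCharsets.UTF_8));
        }
      }
    } catch (SQLException e) {
      throw new IOException("failed to load " + resource, e);
    }
    return delegate.openResource(resource); // fall back to normal loading
  }

  @Override
  public <T> Class<? extends T> findClass(String cname, Class<T> expectedType) {
    return delegate.findClass(cname, expectedType);
  }

  @Override
  public <T> T newInstance(String cname, Class<T> expectedType) {
    return delegate.newInstance(cname, expectedType);
  }
}
{noformat}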

> Add KeepWordFilter, StemmerOverrideFilter, StopFilterFactory, SynonymFilter 
> that reads data from a JDBC source
> --
>
> Key: SOLR-9887
> URL: https://issues.apache.org/jira/browse/SOLR-9887
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tobias Kässmann
>Priority: Minor
>
> We've created some new {{FilterFactories}} that read their stopwords or 
> synonyms from a database (via a JDBC source). That enables easy 
> management of large lists and also adds the possibility to do this in other 
> tools. JDBC data sources are retrieved via JNDI.
> For easy reloading of these lists we've added a {{SeacherAwareReloader}} 
> abstraction that reloads the lists on every new searcher event.
> If this feature is interesting for Solr, we will create a pull 
> request. All the sources are currently available here: 
> https://github.com/shopping24/solr-jdbc



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7707) Only assign ScoreDoc#shardIndex if it was already assigned to non default (-1) value

2017-02-24 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7707:
---
Attachment: LUCENE-7707.patch

Thanks [~jpountz]; I added checking for that abuse, and the original test case 
for this...

> Only assign ScoreDoc#shardIndex if it was already assigned to non default 
> (-1) value
> 
>
> Key: LUCENE-7707
> URL: https://issues.apache.org/jira/browse/LUCENE-7707
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (7.0), 6.5.0
>
> Attachments: LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch
>
>
> When you use TopDocs.merge today it always overrides the ScoreDoc#shardIndex 
> value. The assumption made here is that all shard results are merged 
> at once, which is not necessarily the case. If, for instance, incremental merge 
> phases are applied, the shard index doesn't correspond to the index in the 
> outer TopDocs array. To make this a backwards-compatible yet 
> non-controversial change, we could change the internals of TopDocs#merge to 
> only assign this value if it has not already been assigned a non-default 
> (i.e. not -1) value, to allow multiple or sparse top docs merging.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883363#comment-15883363
 ] 

Steve Rowe edited comment on LUCENE-7708 at 2/24/17 7:46 PM:
-

I'm beasting 1000 iterations of TestRandomChains with the patch, and run 110 
produced the following reproducing seed - maybe it's ShingleFilter's fault?  (I 
didn't investigate further):

*edit*: this seed fails on unpatched master, so the patch on this issue isn't 
to blame.  I created a separate issue: LUCENE-7711

{noformat}
  [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='\ufac4\u0552H 
\ua954\ua944 \ud0d2\uaddd\ub6cb\uc388\uc344\uca88\ud224\uc462\uaf42 g '
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2>   
org.apache.lucene.analysis.charfilter.HTMLStripCharFilter(java.io.StringReader@3fb9d00e,
 [, , , ])
   [junit4]   2> tokenizer=
   [junit4]   2>   
org.apache.lucene.analysis.standard.StandardTokenizer(org.apache.lucene.util.AttributeFactory$1@c893af9b)
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.miscellaneous.KeywordRepeatFilter(ValidatingTokenFilter@7e1e9fe2
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.cjk.CJKBigramFilter(ValidatingTokenFilter@12c3fb1b 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@31c463b5 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false,
 49)
   [junit4]   2>   
org.apache.lucene.analysis.in.IndicNormalizationFilter(ValidatingTokenFilter@3f72787
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2> offsetsAreCorrect=false
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChains -Dtests.seed=E532502212098AC7 -Dtests.slow=true 
-Dtests.locale=ko-KR -Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.76s | TestRandomChains.testRandomChains <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: startOffset 
must be non-negative, and endOffset must be >= startOffset; got 
startOffset=10,endOffset=9
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([E532502212098AC7:D8D37943551B9707]:0)
   [junit4]>at 
org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:110)
   [junit4]>at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.in.IndicNormalizationFilter.incrementToken(IndicNormalizationFilter.java:40)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:731)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:642)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:540)
   [junit4]>at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:853)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4] OK  1.64s | TestRandomChains.testRandomChainsWithLargeStrings
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{dummy=PostingsFormat(name=LuceneVarGapFixedInterval)}, docValues:{}, 
maxPointsInLeafNode=542, maxMBSortInHeap=7.773738401752009, 
sim=RandomSimilarity(queryNorm=false): {}, locale=ko-KR, 
timezone=Atlantic/Jan_Mayen
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=400845920,total=514850816
   [junit4]   2> NOTE: All tests run in this JVM: [TestRandomChains]
   [junit4] Completed [1/1 (1!)] in 6.03s, 2 tests, 1 error <<< FAILURES!
{noformat}


was (Author: steve_rowe):
I'm beasting 1000 iterations of TestRandomChains with the patch, and run 110 
produced the following reproducing seed - maybe it's ShingleFilter's fault?  (I 
didn't investigate further):

*edit*: this seed fails on unpatched master, so the patch on this issue isn't 
to blame.  I'll create a different issue.

{noformat}
  

[jira] [Created] (LUCENE-7711) TestRandomChains.testRandomChains() failure: got startOffset=10,endOffset=9

2017-02-24 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7711:
--

 Summary: TestRandomChains.testRandomChains() failure: got 
startOffset=10,endOffset=9
 Key: LUCENE-7711
 URL: https://issues.apache.org/jira/browse/LUCENE-7711
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


Found while beasting TestRandomChains for LUCENE-7708 (note though that the 
failure below reproduces on a clean master checkout):

{noformat}
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='\ufac4\u0552H 
\ua954\ua944 \ud0d2\uaddd\ub6cb\uc388\uc344\uca88\ud224\uc462\uaf42 g '
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2>   
org.apache.lucene.analysis.charfilter.HTMLStripCharFilter(java.io.StringReader@72c69dd0,
 [, , , ])
   [junit4]   2> tokenizer=
   [junit4]   2>   
org.apache.lucene.analysis.standard.StandardTokenizer(org.apache.lucene.util.AttributeFactory$1@2ff87e59)
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.miscellaneous.KeywordRepeatFilter(ValidatingTokenFilter@2c2ac1cc
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.cjk.CJKBigramFilter(ValidatingTokenFilter@51f2f8f0 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@2ea3d3ba 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false,
 49)
   [junit4]   2>   
org.apache.lucene.analysis.in.IndicNormalizationFilter(ValidatingTokenFilter@25d58ed7
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2> offsetsAreCorrect=false
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChains -Dtests.seed=E532502212098AC7 -Dtests.slow=true 
-Dtests.locale=ko-KR -Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.23s | TestRandomChains.testRandomChains <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: startOffset 
must be non-negative, and endOffset must be >= startOffset; got 
startOffset=10,endOffset=9
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([E532502212098AC7:D8D37943551B9707]:0)
   [junit4]>at 
org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:110)
   [junit4]>at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.in.IndicNormalizationFilter.incrementToken(IndicNormalizationFilter.java:40)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:731)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:642)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:540)
   [junit4]>at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:853)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{dummy=PostingsFormat(name=LuceneVarGapFixedInterval)}, docValues:{}, 
maxPointsInLeafNode=542, maxMBSortInHeap=7.773738401752009, 
sim=RandomSimilarity(queryNorm=false): {}, locale=ko-KR, 
timezone=Atlantic/Jan_Mayen
   [junit4]   2> NOTE: Mac OS X 10.12.3 x86_64/Oracle Corporation 1.8.0_112 
(64-bit)/cpus=8,threads=1,free=225448184,total=257425408
   [junit4]   2> NOTE: All tests run in this JVM: [TestRandomChains]
   [junit4] Completed [1/1 (1!)] in 1.92s, 1 test, 1 error <<< FAILURES!
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883363#comment-15883363
 ] 

Steve Rowe edited comment on LUCENE-7708 at 2/24/17 7:36 PM:
-

I'm beasting 1000 iterations of TestRandomChains with the patch, and run 110 
produced the following reproducing seed - maybe it's ShingleFilter's fault?  (I 
didn't investigate further):

*edit*: this seed fails on unpatched master, so the patch on this issue isn't 
to blame.  I'll create a different issue.

{noformat}
  [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='\ufac4\u0552H 
\ua954\ua944 \ud0d2\uaddd\ub6cb\uc388\uc344\uca88\ud224\uc462\uaf42 g '
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2>   
org.apache.lucene.analysis.charfilter.HTMLStripCharFilter(java.io.StringReader@3fb9d00e,
 [, , , ])
   [junit4]   2> tokenizer=
   [junit4]   2>   
org.apache.lucene.analysis.standard.StandardTokenizer(org.apache.lucene.util.AttributeFactory$1@c893af9b)
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.miscellaneous.KeywordRepeatFilter(ValidatingTokenFilter@7e1e9fe2
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.cjk.CJKBigramFilter(ValidatingTokenFilter@12c3fb1b 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@31c463b5 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false,
 49)
   [junit4]   2>   
org.apache.lucene.analysis.in.IndicNormalizationFilter(ValidatingTokenFilter@3f72787
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2> offsetsAreCorrect=false
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChains -Dtests.seed=E532502212098AC7 -Dtests.slow=true 
-Dtests.locale=ko-KR -Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.76s | TestRandomChains.testRandomChains <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: startOffset 
must be non-negative, and endOffset must be >= startOffset; got 
startOffset=10,endOffset=9
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([E532502212098AC7:D8D37943551B9707]:0)
   [junit4]>at 
org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:110)
   [junit4]>at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.in.IndicNormalizationFilter.incrementToken(IndicNormalizationFilter.java:40)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:731)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:642)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:540)
   [junit4]>at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:853)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4] OK  1.64s | TestRandomChains.testRandomChainsWithLargeStrings
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{dummy=PostingsFormat(name=LuceneVarGapFixedInterval)}, docValues:{}, 
maxPointsInLeafNode=542, maxMBSortInHeap=7.773738401752009, 
sim=RandomSimilarity(queryNorm=false): {}, locale=ko-KR, 
timezone=Atlantic/Jan_Mayen
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=400845920,total=514850816
   [junit4]   2> NOTE: All tests run in this JVM: [TestRandomChains]
   [junit4] Completed [1/1 (1!)] in 6.03s, 2 tests, 1 error <<< FAILURES!
{noformat}


was (Author: steve_rowe):
I'm beasting 1000 iterations of TestRandomChains with the patch, and run 110 
produced the following reproducing seed - maybe it's ShingleFilter's fault?  (I 
didn't investigate further):

{noformat}
  [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='\ufac4\u0552H 

Re: [JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1153 - Unstable!

2017-02-24 Thread Michael McCandless
This is https://issues.apache.org/jira/browse/LUCENE-7707 ... I'm iterating
on a fix.

Mike McCandless

http://blog.mikemccandless.com

On Fri, Feb 24, 2017 at 9:33 AM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1153/
> Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC
>
> 1 tests failed.
> FAILED:  org.apache.lucene.search.TestShardSearching.testSimple
>
> Error Message:
> wrong hit docID expected:<22> but was:<13>
>
> Stack Trace:
> java.lang.AssertionError: wrong hit docID expected:<22> but was:<13>
> at __randomizedtesting.SeedInfo.seed([D90B92C4914AB956:
> E1B8B63AB6B96D87]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at org.apache.lucene.util.TestUtil.assertEquals(
> TestUtil.java:1050)
> at org.apache.lucene.search.TestShardSearching.assertSame(
> TestShardSearching.java:387)
> at org.apache.lucene.search.TestShardSearching.testSimple(
> TestShardSearching.java:236)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(
> RandomizedRunner.java:1713)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(
> RandomizedRunner.java:907)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(
> RandomizedRunner.java:943)
> at com.carrotsearch.randomizedtesting.
> RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(
> TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(
> TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> StatementRunner.run(ThreadLeakControl.java:368)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.
> forkTimeoutingTask(ThreadLeakControl.java:817)
> at com.carrotsearch.randomizedtesting.
> ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.
> runSingleTest(RandomizedRunner.java:916)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(
> RandomizedRunner.java:802)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(
> RandomizedRunner.java:852)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(
> RandomizedRunner.java:863)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(
> TestRuleStoreClassName.java:41)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(
> TestRuleAssertionsRequired.java:53)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(
> TestRuleIgnoreTestSuites.java:54)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> 

[jira] [Commented] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883363#comment-15883363
 ] 

Steve Rowe commented on LUCENE-7708:


I'm beasting 1000 iterations of TestRandomChains with the patch, and run 110 
produced the following reproducing seed - maybe it's ShingleFilter's fault?  (I 
didn't investigate further):

{noformat}
  [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='\ufac4\u0552H 
\ua954\ua944 \ud0d2\uaddd\ub6cb\uc388\uc344\uca88\ud224\uc462\uaf42 g '
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2>   
org.apache.lucene.analysis.charfilter.HTMLStripCharFilter(java.io.StringReader@3fb9d00e,
 [, , , ])
   [junit4]   2> tokenizer=
   [junit4]   2>   
org.apache.lucene.analysis.standard.StandardTokenizer(org.apache.lucene.util.AttributeFactory$1@c893af9b)
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.miscellaneous.KeywordRepeatFilter(ValidatingTokenFilter@7e1e9fe2
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.cjk.CJKBigramFilter(ValidatingTokenFilter@12c3fb1b 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@31c463b5 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false,
 49)
   [junit4]   2>   
org.apache.lucene.analysis.in.IndicNormalizationFilter(ValidatingTokenFilter@3f72787
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2> offsetsAreCorrect=false
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChains -Dtests.seed=E532502212098AC7 -Dtests.slow=true 
-Dtests.locale=ko-KR -Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.76s | TestRandomChains.testRandomChains <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: startOffset 
must be non-negative, and endOffset must be >= startOffset; got 
startOffset=10,endOffset=9
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([E532502212098AC7:D8D37943551B9707]:0)
   [junit4]>at 
org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:110)
   [junit4]>at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.in.IndicNormalizationFilter.incrementToken(IndicNormalizationFilter.java:40)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:731)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:642)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:540)
   [junit4]>at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:853)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4] OK  1.64s | TestRandomChains.testRandomChainsWithLargeStrings
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{dummy=PostingsFormat(name=LuceneVarGapFixedInterval)}, docValues:{}, 
maxPointsInLeafNode=542, maxMBSortInHeap=7.773738401752009, 
sim=RandomSimilarity(queryNorm=false): {}, locale=ko-KR, 
timezone=Atlantic/Jan_Mayen
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=400845920,total=514850816
   [junit4]   2> NOTE: All tests run in this JVM: [TestRandomChains]
   [junit4] Completed [1/1 (1!)] in 6.03s, 2 tests, 1 error <<< FAILURES!
{noformat}

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a 

[jira] [Comment Edited] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883363#comment-15883363
 ] 

Steve Rowe edited comment on LUCENE-7708 at 2/24/17 7:17 PM:
-

I'm beasting 1000 iterations of TestRandomChains with the patch, and at run 110 
found the following reproducing seed - maybe it's ShingleFilter's fault?  (I 
didn't investigate further):

{noformat}
  [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='\ufac4\u0552H 
\ua954\ua944 \ud0d2\uaddd\ub6cb\uc388\uc344\uca88\ud224\uc462\uaf42 g '
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2>   
org.apache.lucene.analysis.charfilter.HTMLStripCharFilter(java.io.StringReader@3fb9d00e,
 [, , , ])
   [junit4]   2> tokenizer=
   [junit4]   2>   
org.apache.lucene.analysis.standard.StandardTokenizer(org.apache.lucene.util.AttributeFactory$1@c893af9b)
   [junit4]   2> filters=
   [junit4]   2>   
org.apache.lucene.analysis.miscellaneous.KeywordRepeatFilter(ValidatingTokenFilter@7e1e9fe2
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.cjk.CJKBigramFilter(ValidatingTokenFilter@12c3fb1b 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2>   
org.apache.lucene.analysis.shingle.ShingleFilter(ValidatingTokenFilter@31c463b5 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false,
 49)
   [junit4]   2>   
org.apache.lucene.analysis.in.IndicNormalizationFilter(ValidatingTokenFilter@3f72787
 
term=,bytes=[],startOffset=0,endOffset=0,positionIncrement=1,positionLength=1,type=word,flags=0,payload=null,keyword=false)
   [junit4]   2> offsetsAreCorrect=false
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChains -Dtests.seed=E532502212098AC7 -Dtests.slow=true 
-Dtests.locale=ko-KR -Dtests.timezone=Atlantic/Jan_Mayen -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.76s | TestRandomChains.testRandomChains <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: startOffset 
must be non-negative, and endOffset must be >= startOffset; got 
startOffset=10,endOffset=9
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([E532502212098AC7:D8D37943551B9707]:0)
   [junit4]>at 
org.apache.lucene.analysis.tokenattributes.PackedTokenAttributeImpl.setOffset(PackedTokenAttributeImpl.java:110)
   [junit4]>at 
org.apache.lucene.analysis.shingle.ShingleFilter.incrementToken(ShingleFilter.java:345)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.in.IndicNormalizationFilter.incrementToken(IndicNormalizationFilter.java:40)
   [junit4]>at 
org.apache.lucene.analysis.ValidatingTokenFilter.incrementToken(ValidatingTokenFilter.java:67)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:731)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:642)
   [junit4]>at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:540)
   [junit4]>at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:853)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4] OK  1.64s | TestRandomChains.testRandomChainsWithLargeStrings
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{dummy=PostingsFormat(name=LuceneVarGapFixedInterval)}, docValues:{}, 
maxPointsInLeafNode=542, maxMBSortInHeap=7.773738401752009, 
sim=RandomSimilarity(queryNorm=false): {}, locale=ko-KR, 
timezone=Atlantic/Jan_Mayen
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=400845920,total=514850816
   [junit4]   2> NOTE: All tests run in this JVM: [TestRandomChains]
   [junit4] Completed [1/1 (1!)] in 6.03s, 2 tests, 1 error <<< FAILURES!
{noformat}


was (Author: steve_rowe):
I'm beasting 1000 iterations of TestRandomChains with the patch, and run 110 
found the following reproducing seed - maybe it's SingleFilter's fault?  (I 
didn't investigate further):

{noformat}
  [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> TEST FAIL: useCharFilter=false text='\ufac4\u0552H 
\ua954\ua944 \ud0d2\uaddd\ub6cb\uc388\uc344\uca88\ud224\uc462\uaf42 g '
   [junit4]   2> Exception from random analyzer: 
   

RE: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread Uwe Schindler
I have the feeling this broke Jenkins. Millions of NPEs with JDK 8u121:

https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19042/console

130 test failures by NPE in 
org.apache.solr.core.CoreContainer.getNodeNameLocal()
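
A plausible cause, judging from the new helper in the quoted diff below (my 
guess, not verified): in plain standalone tests getConfig().getCloudConfig() 
can return null, so the chained calls in getNodeNameLocal() throw. A null-safe 
sketch of that helper, where the CloudConfig accessors are taken from the diff 
and the null handling is assumed:

{noformat}
// Hypothetical null-safe variant of getNodeNameLocal() in CoreContainer;
// CloudConfig lives in the same package, so no extra import is needed.
private String getNodeNameLocal() {
  CloudConfig cloudConfig = getConfig().getCloudConfig();
  if (cloudConfig == null) {
    // no cloud config available in this mode: fall back to a default name
    return "127.0.0.1:8983_solr";
  }
  return cloudConfig.getHost() + ":" + cloudConfig.getSolrHostPort() + "_solr";
}
{noformat}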

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: jan...@apache.org [mailto:jan...@apache.org]
> Sent: Friday, February 24, 2017 2:31 PM
> To: comm...@lucene.apache.org
> Subject: lucene-solr:master: SOLR-9640: Support PKI authentication and SSL
> in standalone-mode master/slave auth with local security.json
> 
> Repository: lucene-solr
> Updated Branches:
>   refs/heads/master 5eeb8136f -> 95d6fc251
> 
> 
> SOLR-9640: Support PKI authentication and SSL in standalone-mode
> master/slave auth with local security.json
> 
> 
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-
> solr/commit/95d6fc25
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/95d6fc25
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/95d6fc25
> 
> Branch: refs/heads/master
> Commit: 95d6fc2512d6525b2354165553f0d6cc4d0d6310
> Parents: 5eeb813
> Author: Jan Høydahl 
> Authored: Fri Feb 24 14:26:48 2017 +0100
> Committer: Jan Høydahl 
> Committed: Fri Feb 24 14:30:42 2017 +0100
> 
> --
>  solr/CHANGES.txt|   2 +
>  .../org/apache/solr/core/CoreContainer.java |   9 +-
>  .../solr/security/PKIAuthenticationPlugin.java  |  42 +-
>  .../org/apache/solr/servlet/HttpSolrCall.java   |   4 +-
>  .../apache/solr/servlet/SolrDispatchFilter.java |  11 +-
>  .../solr/security/BasicAuthDistributedTest.java | 136 +++
>  .../security/TestPKIAuthenticationPlugin.java   |  38 +-
>  .../solr/BaseDistributedSearchTestCase.java |  37 -
>  8 files changed, 260 insertions(+), 19 deletions(-)
> --
> 
> 
> http://git-wip-us.apache.org/repos/asf/lucene-
> solr/blob/95d6fc25/solr/CHANGES.txt
> --
> diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
> index 0302615..2c5f0db 100644
> --- a/solr/CHANGES.txt
> +++ b/solr/CHANGES.txt
> @@ -134,6 +134,8 @@ New Features
>field must both be stored=false, indexed=false, docValues=true. (Ishan
> Chattopadhyaya, hossman, noble,
>shalin, yonik)
> 
> +* SOLR-9640: Support PKI authentication and SSL in standalone-mode
> master/slave auth with local security.json (janhoy)
> +
>  Bug Fixes
>  --
> 
> 
> http://git-wip-us.apache.org/repos/asf/lucene-
> solr/blob/95d6fc25/solr/core/src/java/org/apache/solr/core/CoreContainer.
> java
> --
> diff --git a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> index e3977d7..6115562 100644
> --- a/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> +++ b/solr/core/src/java/org/apache/solr/core/CoreContainer.java
> @@ -497,7 +497,9 @@ public class CoreContainer {
>  hostName = cfg.getNodeName();
> 
>  zkSys.initZooKeeper(this, solrHome, cfg.getCloudConfig());
> -if(isZooKeeperAware())  pkiAuthenticationPlugin = new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName());
> +pkiAuthenticationPlugin = isZooKeeperAware() ?
> +new PKIAuthenticationPlugin(this, zkSys.getZkController().getNodeName()) :
> +new PKIAuthenticationPlugin(this, getNodeNameLocal());
> 
>  MDCLoggingContext.setNode(this);
> 
> @@ -618,6 +620,11 @@ public class CoreContainer {
>  }
>}
> 
> +  // Builds a node name to be used with PKIAuth.
> +  private String getNodeNameLocal() {
> +return getConfig().getCloudConfig().getHost()+":"+getConfig().getCloudConfig().getSolrHostPort()+"_solr";
> +  }
> +
>public void securityNodeChanged() {
>  log.info("Security node changed, reloading security.json");
>  reloadSecurityProperties();
> 
> http://git-wip-us.apache.org/repos/asf/lucene-
> solr/blob/95d6fc25/solr/core/src/java/org/apache/solr/security/PKIAuthenti
> cationPlugin.java
> --
> diff --git
> a/solr/core/src/java/org/apache/solr/security/PKIAuthenticationPlugin.java
> b/solr/core/src/java/org/apache/solr/security/PKIAuthenticationPlugin.java
> index fdd4408..d185bc9 100644
> ---
> a/solr/core/src/java/org/apache/solr/security/PKIAuthenticationPlugin.java
> +++
> b/solr/core/src/java/org/apache/solr/security/PKIAuthenticationPlugin.java
> @@ -22,7 +22,9 @@ import javax.servlet.ServletResponse;
>  import javax.servlet.http.HttpServletRequest;
>  import 

[jira] [Updated] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-7708:
-
Attachment: LUCENE-7708.patch

Thanks Steve!
I pushed a new patch that solves the test failures.

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch, LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abusive cases to two candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> option would be to remove the attribute from these token filters, but this 
> could break BWC.
> This is a serious bug, though, since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7705) Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the max token length

2017-02-24 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7705?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-7705:
---
Attachment: LUCENE-7705.patch

Oops, forgot to "git add" on the new test file.

> Allow CharTokenizer-derived tokenizers and KeywordTokenizer to configure the 
> max token length
> -
>
> Key: LUCENE-7705
> URL: https://issues.apache.org/jira/browse/LUCENE-7705
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7705.patch, LUCENE-7705.patch
>
>
> SOLR-10186
> [~erickerickson]: Is there a good reason that we hard-code a 256-character 
> limit for the CharTokenizer? Changing this limit currently requires people to 
> copy/paste incrementToken into some new class, since incrementToken is final.
> KeywordTokenizer can easily change the default (which is also 256 bytes), but 
> doing so requires code rather than being able to configure it in the schema.
> For KeywordTokenizer, this is Solr-only. For the CharTokenizer classes 
> (WhitespaceTokenizer, UnicodeWhitespaceTokenizer and LetterTokenizer) 
> (Factories) it would take adding a c'tor to the base class in Lucene and 
> using it in the factory.
> Any objections?
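
A rough sketch of how the proposal might look from the factory side (names 
like "maxTokenLen" are illustrations for this sketch, not a committed API):

{noformat}
import java.util.HashMap;
import java.util.Map;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizerFactory;
import org.apache.lucene.util.AttributeFactory;

public class MaxTokenLenSketch {
  public static void main(String[] args) throws Exception {
    // Pass the limit through factory args instead of the hard-coded 256;
    // "maxTokenLen" is an assumed parameter name for this sketch.
    Map<String, String> factoryArgs = new HashMap<>();
    factoryArgs.put("maxTokenLen", "1024");
    WhitespaceTokenizerFactory factory = new WhitespaceTokenizerFactory(factoryArgs);
    Tokenizer tokenizer = factory.create(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY);
  }
}
{noformat}

In the schema this would presumably surface as a maxTokenLen attribute on the 
tokenizer factory element.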



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10147) Admin UI -> Cloud -> Graph: Impossible to see shard state

2017-02-24 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883181#comment-15883181
 ] 

Amrit Sarkar commented on SOLR-10147:
-

Glad you found it useful.

> Admin UI -> Cloud -> Graph: Impossible to see shard state
> -
>
> Key: SOLR-10147
> URL: https://issues.apache.org/jira/browse/SOLR-10147
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5
>
> Attachments: color_and_style.png, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, screenshot-4.png, screenshot-5.png, screenshot-6.png, 
> SOLR-10147.patch, SOLR-10147.patch, SOLR-10147-v1.patch
>
>
> Currently in the Cloud -> Graph view there is a legend with color codes, but 
> that is for replicas only.
> We need a way to quickly see the state of the shard, in particular if it is 
> active or inactive. For testing, create a collection, then call SPLITSHARD on 
> shard1, and you'll end up with shards {{shard1}}, {{shard1_0}} and 
> {{shard1_1}}. It is not possible to see which one is active or inactive.
> Also, the replicas belonging to the inactive shard are still marked with 
> green "Active", while in reality they are "Inactive".
> The simplest would be to add a new state "Inactive" with color e.g. blue, 
> which would be used on both shard and replica level. But since an inactive 
> replica could also be "Gone" or "Down", there should be some way to indicate 
> both at the same time...



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7708) Track PositionLengthAttribute abuse

2017-02-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883169#comment-15883169
 ] 

Steve Rowe commented on LUCENE-7708:


+1 to the idea, but some tests are failing with the patch:

{noformat}
   [junit4] Tests with failures [seed: 4D8AED66905F8617]:
   [junit4]   - 
org.apache.lucene.analysis.shingle.ShingleFilterTest.testOutputUnigramsIfNoShinglesSingleTokenCase
   [junit4]   - 
org.apache.lucene.analysis.shingle.ShingleFilterTest.testOutputUnigramsIfNoShinglesWithMultipleInputTokens
   [junit4]   - 
org.apache.lucene.analysis.shingle.ShingleAnalyzerWrapperTest.testOutputUnigramsIfNoShinglesSingleToken
   [junit4]   - 
org.apache.lucene.analysis.shingle.TestShingleFilterFactory.testOutputUnigramsIfNoShingles
{noformat}

> Track PositionLengthAttribute abuse
> ---
>
> Key: LUCENE-7708
> URL: https://issues.apache.org/jira/browse/LUCENE-7708
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser, modules/analysis
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7708.patch
>
>
> Some token filters use the position length attribute of the token stream to 
> encode the number of terms they put in a single token. 
> This breaks query parsing because it creates a disconnected graph. 
> I've tracked down the abusive cases to two candidates:
> * ShingleFilter, which sets the position length attribute to the length of the 
> shingle.
> * CJKBigramFilter, which always sets the position length attribute to 2.
> I don't think these filters should set the position length at all, so the best 
> option would be to remove the attribute from these token filters, but this 
> could break BWC.
> This is a serious bug, though, since shingles and CJK bigrams now produce 
> invalid queries.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10201) Admin UI: Add Collection "creates collection", "Connection to Solr lost", when replicationFactor>1

2017-02-24 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883168#comment-15883168
 ] 

Amrit Sarkar commented on SOLR-10201:
-

[~erickerickson] [~upayavira] For async calls we already have this option: 
https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-AsynchronousCalls
One new parameter to the heavyweight APIs, and we can fetch "check status" 
content from ZK too.

I will give it a shot: pass async=true for both the AddCollection and 
SplitShard HTTP requests, and try to add a small dialog box displaying 
their status. Thanks.
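
For reference, the async flow from that page boils down to two calls; the 
request id ("1000" here) is chosen by the caller:

{noformat}
# fire the heavyweight operation asynchronously
/admin/collections?action=SPLITSHARD&collection=test&shard=shard1&async=1000

# poll its status later
/admin/collections?action=REQUESTSTATUS&requestid=1000
{noformat}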

> Admin UI: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> --
>
> Key: SOLR-10201
> URL: https://issues.apache.org/jira/browse/SOLR-10201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Amrit Sarkar
> Attachments: screenshot-1.png
>
>
> "Add Collection" fails miserably when replicationFactor >1.
> There must be a better way to handle the request we are making through JS.
> PF screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10201) Admin UI: Add Collection "creates collection", "Connection to Solr lost", when replicationFactor>1

2017-02-24 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883133#comment-15883133
 ] 

Upayavira commented on SOLR-10201:
--

[~sarkaramr...@gmail.com] look at the places where doNotIntercept is set, and 
do the same. It seems it is set in the services, in services.js, so you could 
set it for certain tasks.

[~erickerickson] JS is inherently async anyway, so you could have "request 
submitted" and "request complete" feedback without any substantial change in 
the code. Adding a 'check status' button would require the backend calls to be 
async, which would be considerably more work if they aren't async already.


> Admin UI: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> --
>
> Key: SOLR-10201
> URL: https://issues.apache.org/jira/browse/SOLR-10201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Amrit Sarkar
> Attachments: screenshot-1.png
>
>
> "Add Collection" fails miserably when replicationFactor >1.
> There must be a better way to handle the request we are making through JS.
> PF screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-02-24 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10205?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-10205:
---

Assignee: Yonik Seeley

> Evaluate and reduce BlockCache store failures
> -
>
> Key: SOLR-10205
> URL: https://issues.apache.org/jira/browse/SOLR-10205
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> The BlockCache is written such that requests to cache a block 
> (BlockCache.store call) can fail, making caching less effective.  We should 
> evaluate the impact of this storage failure and potentially reduce the number 
> of storage failures.
> The implementation reserves a single block of memory.  In store, a block of 
> memory is allocated, and then a pointer is inserted into the underlying map.  
> A block is only freed when the underlying map evicts the map entry.
> This means that when two store() operations are called concurrently (even 
> under low load), one can fail.  This is made worse by the fact that 
> concurrent maps typically tend to amortize the cost of eviction over many 
> keys (i.e. the actual size of the map can grow beyond the configured maximum 
> number of entries... both the older ConcurrentLinkedHashMap and newer 
> Caffeine do this).  When this is the case, store() won't be able to find a 
> free block of memory, even if there aren't any other concurrently operating 
> stores.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10205) Evaluate and reduce BlockCache store failures

2017-02-24 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10205:
---

 Summary: Evaluate and reduce BlockCache store failures
 Key: SOLR-10205
 URL: https://issues.apache.org/jira/browse/SOLR-10205
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


The BlockCache is written such that requests to cache a block (BlockCache.store 
call) can fail, making caching less effective.  We should evaluate the impact 
of this storage failure and potentially reduce the number of storage failures.

The implementation reserves a single block of memory.  In store, a block of 
memory is allocated, and then a pointer is inserted into the underlying map.  A 
block is only freed when the underlying map evicts the map entry.
This means that when two store() operations are called concurrently (even under 
low load), one can fail.  This is made worse by the fact that concurrent maps 
typically tend to amortize the cost of eviction over many keys (i.e. the actual 
size of the map can grow beyond the configured maximum number of entries... 
both the older ConcurrentLinkedHashMap and newer Caffeine do this).  When this 
is the case, store() won't be able to find a free block of memory, even if 
there aren't any other concurrently operating stores.
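
A toy model of the described scheme (not the real BlockCache code) that makes 
the failure mode concrete: once every block is handed out and nothing has been 
evicted yet, a concurrent store() has no block to grab and must fail.

{noformat}
import java.util.concurrent.atomic.AtomicInteger;

// Blocks are only returned to the pool when the map evicts an entry,
// so store() can fail even under light concurrency.
class ToyBlockPool {
  private final AtomicInteger freeBlocks;

  ToyBlockPool(int totalBlocks) {
    freeBlocks = new AtomicInteger(totalBlocks);
  }

  boolean store() {
    for (;;) {
      int free = freeBlocks.get();
      if (free == 0) {
        return false;  // store failure: no eviction has freed a block yet
      }
      if (freeBlocks.compareAndSet(free, free - 1)) {
        return true;   // grabbed a block; it stays held until eviction
      }
    }
  }

  void onEviction() {
    freeBlocks.incrementAndGet();  // map evicted an entry, block is free again
  }
}
{noformat}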




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10201) Admin UI: Add Collection "creates collection", "Connection to Solr lost", when replicationFactor>1

2017-02-24 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883117#comment-15883117
 ] 

Erick Erickson commented on SOLR-10201:
---

My question is whether heavyweight operations like SPLITSHARD should even _try_ 
to be synchronous from the UI. Any timeout will be wrong for user N+1.

Would it make more sense to always make it async and provide some sort of 
"check status" button? Or even let the user say whether it should be sync or 
async with perhaps a timeout _they_ could specify for the op? I mention this 
last just for discussion; frankly, I'd be fine with not giving them a choice: 
firing it async and providing a "check progress" button. It'd be cool if we had 
a progress bar, but only if it's easy.

FWIW, mostly random musings.

> Admin UI: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> --
>
> Key: SOLR-10201
> URL: https://issues.apache.org/jira/browse/SOLR-10201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Amrit Sarkar
> Attachments: screenshot-1.png
>
>
> "Add Collection" fails miserably when replicationFactor >1.
> There must be a better way to handle the request we are making through JS.
> PF screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1246 - Still Unstable

2017-02-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1246/

2 tests failed.
FAILED:  org.apache.lucene.search.TestShardSearching.testSimple

Error Message:
wrong hit docID expected:<11> but was:<3>

Stack Trace:
java.lang.AssertionError: wrong hit docID expected:<11> but was:<3>
at 
__randomizedtesting.SeedInfo.seed([D94E59EA03A58262:E1FD7D14245656B3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.lucene.util.TestUtil.assertEquals(TestUtil.java:1050)
at 
org.apache.lucene.search.TestShardSearching.assertSame(TestShardSearching.java:387)
at 
org.apache.lucene.search.TestShardSearching.testSimple(TestShardSearching.java:236)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
'sanitycheck' results against client: 
org.apache.solr.client.solrj.impl.HttpSolrClient@5c6917db (not leader) wrong 
[docid] for SolrDocument{id=0, 
id_field_copy_that_does_not_support_in_place_update_s=0, title_s=title0, 
id_i=0, inplace_updatable_float=101.0, _version_=1560229605069029376, 

[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883034#comment-15883034
 ] 

Alexandre Rafalovitch commented on SOLR-6806:
-

[~risdenk] Do the jars have to be all together in solrj-lib, though? I was 
trying to say that the jars are already present, just in different directories 
(web-inf, dist, etc.). The important knowledge is *which jars* form the minimal 
set required for SolrJ. So, if there is a document that lists them clearly, that 
may be enough. And I am guessing that document is already somewhere in the 
build instructions; we can simply expose it in other ways instead of copying the 
actual jars.

And if you do an uber-jar for solrj, you could still pull those jars 
automatically from wherever the primary copies are.

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
> Attachments: solr-zip-docs-extracted.png, solr-zip-extract-graph.png
>
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7710) BlockPackedReader to throw better exception

2017-02-24 Thread Mike Drob (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated LUCENE-7710:
--
Attachment: LUCENE-7710.patch

Attaching a simple patch to include more detail in the thrown exception. All 
tests under lucene/core passed locally for me.

I think we should apply this to both trunk and branch_6x.

[~mikemccand] - Want to take a look? I feel like I'm playing whack-a-mole with 
improving these exceptions individually as I find them in my logs.
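
The gist of such a change, sketched against the constructor in the trace below 
(the exact wording and exception type in the attached patch may differ):

{noformat}
// Inside BlockPackedReader's constructor, where 'token' is the byte just read
// from the IndexInput 'in' and BPV_SHIFT is the existing constant:
final int bitsPerValue = token >>> BPV_SHIFT;
if (bitsPerValue > 64) {
  // name the offending value and the input instead of a bare "Corrupted"
  throw new CorruptIndexException(
      "Corrupted: invalid bitsPerValue=" + bitsPerValue, in);
}
{noformat}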

> BlockPackedReader to throw better exception
> ---
>
> Key: LUCENE-7710
> URL: https://issues.apache.org/jira/browse/LUCENE-7710
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 4.10.3
>Reporter: Mike Drob
> Attachments: LUCENE-7710.patch
>
>
> BlockPackedReader doesn't tell us which file it failed to read. Here's a 
> stack trace from a 4.10.3 install, but it applies to trunk as well.
> {noformat}
> org.apache.solr.common.SolrException; null:java.io.IOException: Corrupted
> at 
> org.apache.lucene.util.packed.BlockPackedReader.<init>(BlockPackedReader.java:56)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadNumeric(Lucene42DocValuesProducer.java:204)
> at 
> org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getNumeric(Lucene42DocValuesProducer.java:174)
> at 
> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getNumeric(PerFieldDocValuesFormat.java:248)
> at 
> org.apache.lucene.index.SegmentCoreReaders.getNumericDocValues(SegmentCoreReaders.java:194)
> at 
> org.apache.lucene.index.SegmentReader.getNumericDocValues(SegmentReader.java:229)
> at org.apache.lucene.search.FieldCacheImpl.getLongs(FieldCacheImpl.java:883)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10201) Admin UI: Add Collection "creates collection", "Connection to Solr lost", when replicationFactor>1

2017-02-24 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15883020#comment-15883020
 ] 

Amrit Sarkar commented on SOLR-10201:
-

Got it, thanks! 

HTTP requests take longer when more than one node is involved in a heavy 
action (creating directories, copying configs to ZK, copying data across 
nodes). *AddCollection* and *SplitShard* are definitely two of them. Should 
we set _doNotTimeout_ for _AddCollection_ and give the user a boolean option 
for it, or should we set it at our end, in the JS API call?

> Admin UI: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> --
>
> Key: SOLR-10201
> URL: https://issues.apache.org/jira/browse/SOLR-10201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Amrit Sarkar
> Attachments: screenshot-1.png
>
>
> "Add Collection" fails miserably when replicationFactor >1.
> There must be a better way to handle the request we are making through JS.
> PF screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7710) BlockPackedReader to throw better exception

2017-02-24 Thread Mike Drob (JIRA)
Mike Drob created LUCENE-7710:
-

 Summary: BlockPackedReader to throw better exception
 Key: LUCENE-7710
 URL: https://issues.apache.org/jira/browse/LUCENE-7710
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 4.10.3
Reporter: Mike Drob


BlockPackedReader doesn't tell us which file it failed to read. Here's a stack 
trace from a 4.10.3 install, but it applies to trunk as well.

{noformat}
org.apache.solr.common.SolrException; null:java.io.IOException: Corrupted
at 
org.apache.lucene.util.packed.BlockPackedReader.<init>(BlockPackedReader.java:56)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.loadNumeric(Lucene42DocValuesProducer.java:204)
at 
org.apache.lucene.codecs.lucene42.Lucene42DocValuesProducer.getNumeric(Lucene42DocValuesProducer.java:174)
at 
org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.getNumeric(PerFieldDocValuesFormat.java:248)
at 
org.apache.lucene.index.SegmentCoreReaders.getNumericDocValues(SegmentCoreReaders.java:194)
at 
org.apache.lucene.index.SegmentReader.getNumericDocValues(SegmentReader.java:229)
at org.apache.lucene.search.FieldCacheImpl.getLongs(FieldCacheImpl.java:883)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+155) - Build # 2933 - Still Unstable!

2017-02-24 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2933/
Java: 32bit/jdk-9-ea+155 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.lucene.search.TestShardSearching.testSimple

Error Message:
wrong hit docID expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: wrong hit docID expected:<2> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([D0AB99D3EEAE9009:E818BD2DC95D44D8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.lucene.util.TestUtil.assertEquals(TestUtil.java:1051)
at 
org.apache.lucene.search.TestShardSearching.assertSame(TestShardSearching.java:387)
at 
org.apache.lucene.search.TestShardSearching.testSimple(TestShardSearching.java:236)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 764 lines...]
   [junit4] Suite: org.apache.lucene.search.TestShardSearching
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestShardSearching 
-Dtests.method=testSimple -Dtests.seed=D0AB99D3EEAE9009 -Dtests.multiplier=3 
-Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Pacific/Saipan 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] 

[jira] [Commented] (SOLR-10201) Admin UI: Add Collection "creates collection", "Connection to Solr lost", when replicationFactor>1

2017-02-24 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882965#comment-15882965
 ] 

Upayavira commented on SOLR-10201:
--

The timeout is set for all components; see .factory('httpInterceptor') in 
js/angular/app.js. That's where the timeout is set.

Also note, in that same method, doNotIntercept. This is an example of how a 
caller can signal a specific behaviour to the interceptor. So, you could have a 
doNotTimeout option that simply avoids setting the "connectionStatusInactive" 
timeout.

> Admin UI: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> --
>
> Key: SOLR-10201
> URL: https://issues.apache.org/jira/browse/SOLR-10201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Amrit Sarkar
> Attachments: screenshot-1.png
>
>
> "Add Collection" fails miserably when replicationFactor >1.
> There must be a better way to handle the request we are making through JS.
> PF screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10201) Admin UI: Add Collection "creates collection", "Connection to Solr lost", when replicationFactor>1

2017-02-24 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882909#comment-15882909
 ] 

Amrit Sarkar commented on SOLR-10201:
-

[~upayavira], thank you for the clarification. 

I understood that point. I was trying to add a button for SPLITSHARD and was 
running into "connection loss" every time I pressed the button. Here is the 
piece of code:
{noformat}
// Delete a replica, then refresh the cloud view 2000 ms later.
$scope.deleteReplica = function(replica) {
  Collections.deleteReplica({collection: replica.collection,
      shard: replica.shard, replica: replica.name}, function(data) {
    replica.deleted = true;
    $timeout(function() {
      $scope.refresh();
    }, 2000);
  });
}
{noformat}
After 2000 ms it refreshes the page. I dug into the dev console on Chrome but 
was not able to find where we set our timeout. Is 2 sec the timeout? Or does it 
have nothing to do with the JS call, and the timeouts are API-specific?

> Admin UI: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> --
>
> Key: SOLR-10201
> URL: https://issues.apache.org/jira/browse/SOLR-10201
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.2
>Reporter: Amrit Sarkar
> Attachments: screenshot-1.png
>
>
> "Add Collection" fails miserably when replicationFactor >1.
> There must be a better way to handle the request we are making through JS.
> PF screenshot.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2017-02-24 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882864#comment-15882864
 ] 

Kevin Risden commented on SOLR-8593:


[~joel.bernstein] - That sounds reasonable. Would be good to get the bulk of 
this into 6.5.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>  Components: Parallel SQL
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-8593.patch, SOLR-8593.patch, SOLR-8593.patch
>
>
>The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work though will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle tested cost based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8440) Script support for enabling basic auth

2017-02-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882863#comment-15882863
 ] 

Jan Høydahl commented on SOLR-8440:
---

Or perhaps multiple sub-commands for the various steps:
{noformat}
bin/solr auth enable [-f] -type    # -f Force to change from existing?
# This sets an empty {{authentication}} object with class only in security.json;
# now you can start using the REST API if you wish
bin/solr auth [--user=solr:SolrRocks] setuser 

[jira] [Commented] (SOLR-9640) Support PKI authentication and SSL in standalone-mode master/slave auth with local security.json

2017-02-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882862#comment-15882862
 ] 

ASF subversion and git services commented on SOLR-9640:
---

Commit 024a39399dbb77678d06f70029575e0e66ded4b4 in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=024a393 ]

SOLR-9640: Support PKI authentication and SSL in standalone-mode master/slave 
auth with local security.json

(cherry picked from commit 95d6fc2)


> Support PKI authentication and SSL in standalone-mode master/slave auth with 
> local security.json
> 
>
> Key: SOLR-9640
> URL: https://issues.apache.org/jira/browse/SOLR-9640
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: authentication, pki
> Fix For: 6.5, master (7.0)
>
> Attachments: SOLR-9640.patch, SOLR-9640.patch, SOLR-9640.patch, 
> SOLR-9640.patch, SOLR-9640.patch
>
>
> While working with SOLR-9481 I managed to secure Solr standalone on a 
> single-node server. However, when adding 
> {{=localhost:8081/solr/foo,localhost:8082/solr/foo}} to the request, I 
> get 401 error. This issue will fix PKI auth to work for standalone, which 
> should automatically make both sharding and master/slave index replication 
> work.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882859#comment-15882859
 ] 

Jan Høydahl commented on SOLR-6806:
---

The downloader just needs to 
* Know what version we have, so we can download the corresponding version
* Find the nearest Apache mirror with 
http://www.apache.org/dyn/closer.lua/lucene/solr?preferred=true
* Attempt download of the archive; on a 404, fall back to the main dist site, 
then to the archive
* Download the checksum from the archive and validate it
* Unzip

The bigger issue here I guess is to start releasing more artifacts. We then get 
source.zip/tgz, bin.zip/tgz, contrib.zip/tgz + .asc, .sha files for each 
version and more work for RMs.
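
A minimal sketch of those steps (assuming the ?preferred=true endpoint returns 
the mirror base URL as plain text, a solr-<version>.tgz artifact name, and 
Java 9+ for readAllBytes; the 404/archive fallback and checksum validation are 
only noted in comments):

{noformat}
import java.io.InputStream;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class SolrDownloaderSketch {
  public static void main(String[] args) throws Exception {
    String version = "6.4.1";  // step 1: we already know our version

    // step 2: ask closer.lua for the nearest mirror
    String mirror;
    try (InputStream in = new URL(
        "http://www.apache.org/dyn/closer.lua/lucene/solr?preferred=true")
        .openStream()) {
      mirror = new String(in.readAllBytes(), StandardCharsets.UTF_8).trim();
    }

    // step 3: attempt the download; on a 404 we would fall back to the main
    // dist site and then to archive.apache.org (omitted here)
    String archive = "solr-" + version + ".tgz";  // assumed artifact name
    Path dest = Paths.get(archive);
    try (InputStream in = new URL(
        mirror + "/lucene/solr/" + version + "/" + archive).openStream()) {
      Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
    }

    // steps 4-5: fetch the .sha checksum from the archive, validate, unzip
  }
}
{noformat}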

> Reduce the size of the main Solr binary download
> 
>
> Key: SOLR-6806
> URL: https://issues.apache.org/jira/browse/SOLR-6806
> Project: Solr
>  Issue Type: Task
>  Components: Build
>Affects Versions: 5.0
>Reporter: Shawn Heisey
> Attachments: solr-zip-docs-extracted.png, solr-zip-extract-graph.png
>
>
> There has been a lot of recent discussion about how large the Solr download 
> is, and how to reduce its size.  The last release (4.10.2) weighs in at 143MB 
> for the tar and 149MB for the zip.
> Most users do not need the full download.  They may never need contrib 
> features, or they may only need one or two, with DIH being the most likely 
> choice.  They could likely get by with a download that's less than 40 MB.
> Our primary competition has a 29MB zip download for the release that's 
> current right now, and not too long ago, that was about 20MB.  I didn't look 
> very deep, but any additional features that might be available for download 
> were not immediately apparent on their website.  I'm sure they exist, but I 
> would guess that most users never need those features, so most users never 
> even see them.
> Solr, by contrast, has everything included ... a "kitchen sink" approach. 
> Once you get past the long download time and fire up the example, you're 
> presented with configs that include features you're likely to never use.
> Although this offers maximum flexibility, I think it also serves to cause 
> confusion in a new user.
> A much better option would be to create a core download that includes only a 
> minimum set of features, probably just the war, the example servlet 
> container, and an example config that only uses the functionality present in 
> the war.  We can create additional downloads that offer additional 
> functionality and configs ... DIH would be a very small addon that would 
> likely be downloaded frequently.
> SOLR-5103 describes a plugin infrastructure which would make it very easy to 
> offer a small core download and then let the user download additional 
> functionality using scripts or the UI.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15882850#comment-15882850
 ] 

Kevin Risden commented on SOLR-6806:


One of the main issues with solrj-lib is that you have to include all of it if 
you want a third-party program that doesn't use Maven to work with Solr. An 
example is the JDBC piece in Solr. There was an issue (SOLR-8680) about trying 
to distribute a single jar for SolrJ, which might even remove the need for 
solrj-lib.

Maven Central doesn't help download all dependencies of solr-solrj if you 
aren't using Maven to compile. There is no shaded or uber jar on Maven for 
solr-solrj.



[jira] [Comment Edited] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882850#comment-15882850
 ] 

Kevin Risden edited comment on SOLR-6806 at 2/24/17 3:12 PM:
-

One of the main issues with solrj-lib is that you have to include all of it if 
you want a third-party program that doesn't use Maven to work with Solr. An 
example is the JDBC piece in Solr. There was an issue about trying to 
distribute a single jar for SolrJ, so that might even help to not require 
having solrj-lib? SOLR-8680

Maven Central doesn't help you download all dependencies of solr-solrj if you 
aren't using Maven to compile. There is no shaded or uber jar on Maven Central 
for solr-solrj.


was (Author: risdenk):
One of the main issues with solrj-lib is that you have to include all of it if 
you want a third-party program that doesn't use Maven to work with Solr. An 
example is the JDBC piece in Solr. There was an issue about trying to 
distribute a single jar for SolrJ, so that might even help having solrj-lib? 
SOLR-8680

Maven Central doesn't help you download all dependencies of solr-solrj if you 
aren't using Maven to compile. There is no shaded or uber jar on Maven Central 
for solr-solrj.




[jira] [Commented] (SOLR-10194) Unable to use the UninvertedField implementation with legacy facets

2017-02-24 Thread Victor Igumnov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882845#comment-15882845
 ] 

Victor Igumnov commented on SOLR-10194:
---

I actually found the root cause of the performance issue: it was due to too 
many segments on disk. Minimizing the number of segments on disk brought the 
performance on par with Solr 4.10. However, this is still a legitimate bug, in 
that the UninvertedField implementation cannot be used without activating 
facet.distrib.mco=true.

I haven't tried docValues with the minimized number of segments yet, but our 
index leans toward the static side of things, so the UninvertedField 
implementation at query time is the ideal use case.
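
For reference, a minimal SolrJ sketch of forcing the segment count down (host, 
collection name, and segment target are illustrative; optimize/forceMerge is 
expensive and should be used sparingly):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class OptimizeSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
             new HttpSolrClient.Builder("http://somehost:9100/solr").build()) {
      // Merge the "collection" index down to at most 4 segments,
      // waiting for the flush and for the new searcher to open.
      client.optimize("collection", true, true, 4);
    }
  }
}
{code}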

> Unable to use the UninvertedField implementation with legacy facets
> ---
>
> Key: SOLR-10194
> URL: https://issues.apache.org/jira/browse/SOLR-10194
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.2, 6.3, 6.4.1
> Environment: Linux
>Reporter: Victor Igumnov
>Priority: Minor
>  Labels: easyfix
>
> FacetComponent's method "modifyRequestForFieldFacets" modifies the 
> distributed facet request and sets the mincount to zero, which prevents the 
> SimpleFacets implementation from reaching the UIF code block when 
> facet.method=uif is applied. The workaround I found is to use 
> facet.distrib.mco=true, which sets the mincount to one instead of zero. 
> Working:
> http://somehost:9100/solr/collection/select?facet.method=uif&facet.field=attribute&q=*:*&indent=true&facet=true&facet.distrib.mco=true
> Non-working:
> http://somehost:9100/solr/collection/select?facet.method=uif&facet.field=attribute&q=*:*&indent=true&facet=true&facet.distrib.mco=false
> Semi-working when it isn't a distributed call:
> http://somehost:9100/solr/collection/select?facet.method=uif&facet.field=attribute&q=*:*&indent=true&facet=true&facet.distrib.mco=false&distrib=false
> Just make sure to run it on a multi-shard setup. 
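
A minimal SolrJ sketch of the working request above, assuming the illustrative 
field name "attribute" from those URLs:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class UifFacetSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder(
            "http://somehost:9100/solr/collection").build()) {
      SolrQuery q = new SolrQuery("*:*");
      q.setFacet(true);
      q.addFacetField("attribute");
      q.set("facet.method", "uif");
      // The workaround: keep mincount at 1 on the distributed request
      // so SimpleFacets can take the UIF code path.
      q.set("facet.distrib.mco", true);
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getFacetField("attribute").getValues());
    }
  }
}
{code}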






[jira] [Updated] (SOLR-10158) MMapDirectoryFactory support for "preload" option (LUCENE-6549)

2017-02-24 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10158:

Attachment: (was: SOLR-10158.patch)

> MMapDirectoryFactory support for "preload" option (LUCENE-6549)
> ---
>
> Key: SOLR-10158
> URL: https://issues.apache.org/jira/browse/SOLR-10158
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Amrit Sarkar
>Priority: Trivial
> Attachments: SOLR-10158.patch
>
>
> Lucene 5.3 added a new preload option to MMapDirectory (see LUCENE-6549).
> MMapDirectoryFactory needs to be updated to offer this as a config option.
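
For reference, a minimal sketch of the underlying Lucene API that such a 
config option would expose, presumably as a boolean {{preload}} parameter on 
the factory (the index path below is illustrative):

{code:java}
import java.nio.file.Paths;
import org.apache.lucene.store.MMapDirectory;

public class PreloadSketch {
  public static void main(String[] args) throws Exception {
    MMapDirectory dir = new MMapDirectory(Paths.get("/var/solr/data/index"));
    dir.setPreload(true); // LUCENE-6549: map and touch all pages up front
    // hand 'dir' to an IndexWriter or DirectoryReader as usual
    dir.close();
  }
}
{code}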






[jira] [Updated] (SOLR-10158) MMapDirectoryFactory support for "preload" option (LUCENE-6549)

2017-02-24 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10158?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10158:

Attachment: SOLR-10158.patch




[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882844#comment-15882844
 ] 

Alexandre Rafalovitch commented on SOLR-6806:
-

Do we have a document on which libs are supposed to go into which directory? I 
admit this is a total black box for me.

I do agree with Jan, though, that ease of use is the primary concern. So I 
would focus first on the things that are just not used at all, or not used by 
the people running Solr as the search engine (javadocs, test libraries, maybe 
some of the contribs that are not trivial to integrate and that we don't 
provide examples for, etc.).

DIH, to me, is a complex story. It really needs to be cleaned up or replaced 
instead of being made more core. But the discussions don't really get anywhere 
so far. 

With solrj-lib, could we instead have a README file that points to which jars 
are required and where they already exist? Because the easiest way to get 
SolrJ is as a Maven dependency anyway (right?), and that already manages the 
dependencies by reference.




[jira] [Commented] (SOLR-10204) Compress the licenses into an inner archive for Solr binary download

2017-02-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882837#comment-15882837
 ] 

Jan Høydahl commented on SOLR-10204:


I'd use .zip, since you'd expect code if it were a .jar, and document in the 
README or NOTICE that you can unzip it using {{jar -xf}}.




[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882836#comment-15882836
 ] 

Shawn Heisey commented on SOLR-6806:


[~janhoy], you make some good points.  If we can build a reliable and fully 
scripted download mechanism, we solve multiple problems.




[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882832#comment-15882832
 ] 

Shawn Heisey commented on SOLR-6806:


bq. I think the whole solrj-client folder is full of duplicate libraries.

Yes, dist/solrj-lib consists of duplicates, taking 6MB of space.  It would be 
one of the things copied by the makedist script I mentioned.

I still think that the main DIH code and jar should be moved into core.  The 
extras should probably remain outside, especially because the dependencies are 
not trivial in size.

I do find it confusing that the contrib jars are in dist, but their 
dependencies are in contrib.  It seems like they should be together in contrib.




[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882830#comment-15882830
 ] 

Jan Høydahl commented on SOLR-6806:
---

A problem with {{makedist.sh}} is that it doesn't support the use case where 
people have scripted: download solr-x.y.tgz, untar, copy dist to somewhere.

Regarding making more release artifacts such as {{solr-contribs-6.4.1.tgz}} 
etc., that would help a lot with download size but would hurt newbies' ease of 
testing all the features. That could be eased by bundling a {{java -jar 
bin/downloader.jar contrib}} command that would go online, fetch the 
corresponding contrib artifact, and unzip it into {{$SOLR_TIP/contrib}}. The 
same approach could be taken for compiling the dist folder, getting the test 
dependencies, javadocs, and more, without needing to implement a full plugin 
architecture from day one. I still think the ideas in SOLR-5103 are a superior 
long-term plan though :)
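
A rough sketch of what such a downloader could do. Everything here is 
hypothetical (the artifact name, the URL layout, and the downloader itself do 
not exist yet), and unpacking the .tgz is left to external tooling since 
java.util.zip has no tar support:

{code:java}
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class ContribDownloaderSketch {
  public static void main(String[] args) throws Exception {
    String version = "6.4.1"; // hypothetical
    // Hypothetical artifact; no such file is published today.
    URL src = new URL("https://archive.apache.org/dist/lucene/solr/" + version
        + "/solr-contribs-" + version + ".tgz");
    String tip = System.getenv().getOrDefault("SOLR_TIP", ".");
    Path dest = Paths.get(tip, "solr-contribs.tgz");
    try (InputStream in = src.openStream()) {
      Files.copy(in, dest, StandardCopyOption.REPLACE_EXISTING);
    }
    System.out.println("Fetched " + dest + "; extract into $SOLR_TIP/contrib");
  }
}
{code}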




[jira] [Created] (SOLR-10204) Compress the licenses into an inner archive for Solr binary download

2017-02-24 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-10204:


 Summary: Compress the licenses into an inner archive for Solr 
binary download
 Key: SOLR-10204
 URL: https://issues.apache.org/jira/browse/SOLR-10204
 Project: Solr
  Issue Type: Sub-task
Affects Versions: master (7.0)
Reporter: Alexandre Rafalovitch


Solr has to ship with the software licenses. However, they are there just for 
reference, and they take up valuable decompressed disk space and decompression 
time. They could instead be shipped as an inner archive, so those who want to 
check them still can, by unarchiving them.

The main question is whether the inner archive format should match the outer 
archive format for the different platforms, use a neutral archive format such 
as .jar, or just stick to .zip.
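
A minimal sketch of the build-time step, assuming a flat licenses/ directory 
(both paths are illustrative); whichever inner format is chosen, the result 
can be unpacked with {{jar -xf}} or any unzip tool:

{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipLicensesSketch {
  public static void main(String[] args) throws IOException {
    Path licensesDir = Paths.get("solr/licenses");  // illustrative
    Path zipFile = Paths.get("solr/licenses.zip");  // illustrative
    try (ZipOutputStream zos =
             new ZipOutputStream(Files.newOutputStream(zipFile));
         DirectoryStream<Path> files = Files.newDirectoryStream(licensesDir)) {
      for (Path f : files) {
        zos.putNextEntry(new ZipEntry(f.getFileName().toString()));
        Files.copy(f, zos); // write the file's bytes into the current entry
        zos.closeEntry();
      }
    }
  }
}
{code}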






[jira] [Created] (SOLR-10203) Remove dist/test-framework from the binary download archive

2017-02-24 Thread Alexandre Rafalovitch (JIRA)
Alexandre Rafalovitch created SOLR-10203:


 Summary: Remove dist/test-framework from the binary download 
archive
 Key: SOLR-10203
 URL: https://issues.apache.org/jira/browse/SOLR-10203
 Project: Solr
  Issue Type: Sub-task
Affects Versions: master (7.0)
Reporter: Alexandre Rafalovitch
Assignee: Alexandre Rafalovitch
Priority: Minor


The libraries in dist/test-framework are shipped with every copy of the Solr 
binary, yet they are not used anywhere directly. They take approximately 
10 MB.

Remove the directory and provide guidance in a README file on how to get these 
libraries, for those who are writing their own testing solutions against Solr.






[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882806#comment-15882806
 ] 

Alexandre Rafalovitch commented on SOLR-6806:
-

I think the whole solrj-client folder is full of duplicate libraries. Perhaps 
it is necessary, but it is certainly something worth keeping a note about.





[jira] [Commented] (SOLR-6806) Reduce the size of the main Solr binary download

2017-02-24 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882799#comment-15882799
 ] 

Shawn Heisey commented on SOLR-6806:


Glad to see some ideas being generated, and some work getting done.  SOLR-9450 
will make a big difference in the download size and a HUGE difference in how 
long archive extraction takes.

Previous comments cover the pain points pretty well.  Here's what I see as the 
remaining low-hanging fruit:

 * Eliminate duplicate jars where possible.  Adding a "makedist" script to copy 
jars from disparate locations to dist is probably a good idea.
 * Compress the licenses into an inner archive so archive extraction is 
speedier.
 * Split the test framework, and the dependencies only required for testing, 
into a separate download.
 * Consider splitting large things currently included in the webapp, like the 
Hadoop integration, into a separate download.
 * Consider splitting contrib modules and dependencies into a separate download.
 * Decide whether the splits mentioned above would all go into the same file or 
into separate files.





[jira] [Commented] (LUCENE-7707) Only assign ScoreDoc#shardIndex if it was already assigned to non default (-1) value

2017-02-24 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15882798#comment-15882798
 ] 

Adrien Grand commented on LUCENE-7707:
--

Let's make sure that the shard index is not -1 if {{setShardIndex}} is false? 
Otherwise +1.

> Only assign ScoreDoc#shardIndex if it was already assigned to non default 
> (-1) value
> 
>
> Key: LUCENE-7707
> URL: https://issues.apache.org/jira/browse/LUCENE-7707
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: master (7.0), 6.5.0
>
> Attachments: LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, 
> LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch, LUCENE-7707.patch
>
>
> When you use TopDocs.merge today, it always overrides the ScoreDoc#shardIndex 
> value. The assumption made here is that all shard results are merged at once, 
> which is not necessarily the case. If, for instance, incremental merge phases 
> are applied, the shard index doesn't correspond to the index in the outer 
> TopDocs array. To make this a backwards-compatible yet non-controversial 
> change, we could change the internals of TopDocs#merge to only assign this 
> value if it hasn't already been set to something other than the default (-1), 
> to allow multiple or sparse top docs merging.
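
To illustrate the sparse/incremental scenario, a minimal sketch that assumes 
the behavior proposed here (merge leaves a pre-assigned, non -1 shardIndex 
untouched); the shard-id mapping and topN are illustrative:

{code:java}
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;

public class SparseMergeSketch {
  static TopDocs mergeKeepingShardIds(TopDocs[] shardHits, int[] shardIds,
                                      int topN) {
    // Pre-assign the caller's shard numbering so an incremental merge
    // doesn't renumber hits by position in this (possibly partial) array.
    for (int i = 0; i < shardHits.length; i++) {
      for (ScoreDoc sd : shardHits[i].scoreDocs) {
        sd.shardIndex = shardIds[i];
      }
    }
    // With the proposed change, merge() would only assign shardIndex
    // where it is still the default -1.
    return TopDocs.merge(topN, shardHits);
  }
}
{code}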





