[JENKINS] Lucene-Solr-SmokeRelease-6.4 - Build # 15 - Still Failing

2017-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.4/15/

No tests ran.

Build Log:
[...truncated 41918 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (29.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.4.2-src.tgz...
   [smoker] 30.6 MB in 0.03 sec (1172.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.4.2.tgz...
   [smoker] 65.0 MB in 0.06 sec (1158.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.4.2.zip...
   [smoker] 75.3 MB in 0.06 sec (1160.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.4.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6206 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.4.2.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6206 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.4.2-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/dev-tools/scripts/smokeTestRelease.py",
 line 1472, in <module>
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/dev-tools/scripts/smokeTestRelease.py",
 line 1416, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/dev-tools/scripts/smokeTestRelease.py",
 line 1454, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/dev-tools/scripts/smokeTestRelease.py",
 line 622, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/dev-tools/scripts/smokeTestRelease.py",
 line 768, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/dev-tools/scripts/smokeTestRelease.py",
 line 1392, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?
   [smoker] Releases that don't seem to be tested:
   [smoker]   6.4.1
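The RuntimeError above comes from confirmAllReleasesAreTestedForBackCompat in smokeTestRelease.py, which cross-checks the list of past Lucene releases against the index versions exercised by TestBackwardsCompatibility. The real check is Python; the following is a self-contained Java sketch of the same set-difference logic (method and variable names are illustrative, not from the actual script):

```java
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Sketch of the coverage check: every past release must appear among the
// versions that TestBackwardsCompatibility has back-compat index files for.
public class BackCompatCoverageCheck {

  // Returns the releases that have no back-compat test coverage.
  static Set<String> untestedReleases(List<String> allReleases, Set<String> testedVersions) {
    Set<String> missing = new LinkedHashSet<>(allReleases);
    missing.removeAll(testedVersions);
    return missing;
  }

  public static void main(String[] args) {
    List<String> releases = List.of("6.4.0", "6.4.1");
    Set<String> tested = Set.of("6.4.0"); // 6.4.1 indexes not yet added to the test
    Set<String> missing = untestedReleases(releases, tested);
    if (!missing.isEmpty()) {
      // mirrors: RuntimeError('some releases are not tested by TestBackwardsCompatibility?')
      System.out.println("Releases that don't seem to be tested: " + missing);
    }
  }
}
```

In this build, 6.4.1 had been released but its back-compat index had not yet been committed to the test, so the smoke tester fails by design until that is done.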

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.4/build.xml:571: 
exec returned: 1

Total time: 39 minutes 22 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3863 - Unstable!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3863/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI

Error Message:
Could not find collection : implicitcoll

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : implicitcoll
at 
__randomizedtesting.SeedInfo.seed([7D24727396243E77:17C5FC18ABBE880F]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:194)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:245)
at 
org.apache.solr.cloud.CustomCollectionTest.testCustomCollectionsAPI(CustomCollectionTest.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:

[jira] [Commented] (SOLR-10134) EmbeddedSolrServer does not support SchemaAPI

2017-02-28 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889665#comment-15889665
 ] 

Mikhail Khludnev commented on SOLR-10134:
-

Noticing the warnings below; will check them soon.
{code}
 [ecj-lint] 1. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\client\solrj\embedded\EmbeddedSolrServer.java
 (at line 203)
 [ecj-lint] new JavaBinCodec(resolver) {
 [ecj-lint]
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocument(SolrDocument doc) {
 [ecj-lint]   callback.streamSolrDocument(doc);
 [ecj-lint]   //super.writeSolrDocument( doc, fields );
 [ecj-lint] }
 [ecj-lint]
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocumentList(SolrDocumentList 
docs) throws IOException {
 [ecj-lint]   if (docs.size() > 0) {
 [ecj-lint] SolrDocumentList tmp = new SolrDocumentList();
 [ecj-lint] tmp.setMaxScore(docs.getMaxScore());
 [ecj-lint] tmp.setNumFound(docs.getNumFound());
 [ecj-lint] tmp.setStart(docs.getStart());
 [ecj-lint] docs = tmp;
 [ecj-lint]   }
 [ecj-lint]   callback.streamDocListInfo(docs.getNumFound(), 
docs.getStart(), docs.getMaxScore());
 [ecj-lint]   super.writeSolrDocumentList(docs);
 [ecj-lint] }
 [ecj-lint]
 [ecj-lint]   }.setWritableDocFields(resolver). 
marshal(rsp.getValues(), out);
 [ecj-lint] 

 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\client\solrj\embedded\EmbeddedSolrServer.java
 (at line 227)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
{code}
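Both warnings flag a JavaBinCodec (a Closeable) that is created inline and never closed. The usual remedy is try-with-resources; whether Solr's actual fix took this exact shape is not shown here. A generic, self-contained sketch with a hypothetical stand-in Codec class (the real JavaBinCodec would need solrj on the classpath):

```java
// Stand-in demonstration of the resource-leak pattern ecj-lint reports,
// and the try-with-resources form that silences it.
public class ResourceLeakFix {

  // Hypothetical stand-in for a closeable codec such as JavaBinCodec.
  static class Codec implements AutoCloseable {
    static int openCount = 0;
    Codec() { openCount++; }
    void marshal(Object value) { /* serialize the value */ }
    @Override public void close() { openCount--; }
  }

  // Leaky form, analogous to `new JavaBinCodec(resolver).marshal(rsp.getValues(), out)`:
  static void leaky(Object value) {
    new Codec().marshal(value); // never closed -> "Resource leak" warning
  }

  // Fixed form: the codec is closed even if marshal throws.
  static void fixed(Object value) {
    try (Codec codec = new Codec()) {
      codec.marshal(value);
    }
  }

  public static void main(String[] args) {
    fixed("rsp");
    System.out.println("open codecs after fixed(): " + Codec.openCount); // prints 0
  }
}
```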

> EmbeddedSolrServer does not support SchemaAPI
> -
>
> Key: SOLR-10134
> URL: https://issues.apache.org/jira/browse/SOLR-10134
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrJ
>Affects Versions: 6.4.1
>Reporter: Robert Alexandersson
>  Labels: test-driven
> Attachments: SOLR-10134.patch, SOLR-10134.patch, SOLR-10134.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The EmbeddedSolrServer does not support calls to the POST methods of the 
> Schema API via the SolrJ API. The reason is that the httpMethod param is never 
> set by EmbeddedSolrServer#request(SolrRequest, String), and it is later 
> required by the SchemaHandler class that actually performs the call in 
> SchemaHandler#handleRequestBody(SolrQueryRequest, SolrQueryResponse). 
> The proposal is to enhance EmbeddedSolrServer to forward the httpMethod at 
> approximately row 174 with the following: "req.getContext().put("httpMethod", 
> request.getMethod().name());". This change requires the factory methods of 
> SolrJ to add the intended method; for example, new SchemaRequest.AddField() 
> should append the POST method, similar to how SchemaRequest.Field appends 
> the GET method.
> I have written a separate EmbeddedSolrServer that replaces the one in Solr. 
> It works for now, and fields can be created on the fly using the Schema API of 
> the SolrJ client, but I would like to be able to remove this workaround.
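The proposal in the description boils down to copying the SolrRequest's HTTP method into the request context, where SchemaHandler looks it up before allowing a schema mutation. A minimal sketch; the class and method names below are simplified stand-ins, not the actual Solr classes:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the httpMethod-forwarding proposal from SOLR-10134.
public class HttpMethodForwarding {

  // Stand-in for building a SolrQueryRequest's context map inside
  // EmbeddedSolrServer#request(SolrRequest, String).
  static Map<String, Object> buildContext(String method) {
    Map<String, Object> context = new HashMap<>();
    // the proposed line: req.getContext().put("httpMethod", request.getMethod().name());
    context.put("httpMethod", method);
    return context;
  }

  // Stand-in for the check SchemaHandler performs before mutating the schema:
  // only POST requests may modify it.
  static boolean allowsSchemaUpdate(Map<String, Object> context) {
    return "POST".equals(context.get("httpMethod"));
  }
}
```

With this in place, a SchemaRequest.AddField sent through EmbeddedSolrServer would carry "POST" in its context, just as it does over HTTP.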



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Commented] (SOLR-10134) EmbeddedSolrServer does not support SchemaAPI

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889662#comment-15889662
 ] 

ASF subversion and git services commented on SOLR-10134:


Commit bce1417fceeed2054f16565e96dc49268c1b2ea1 in lucene-solr's branch 
refs/heads/branch_6x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bce1417 ]

SOLR-10134: EmbeddedSolrServer handles SchemaAPI requests


> EmbeddedSolrServer does not support SchemaAPI
> -
>
> Key: SOLR-10134
> URL: https://issues.apache.org/jira/browse/SOLR-10134
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrJ
>Affects Versions: 6.4.1
>Reporter: Robert Alexandersson
>  Labels: test-driven
> Attachments: SOLR-10134.patch, SOLR-10134.patch, SOLR-10134.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The EmbeddedSolrServer does not support calls to the POST methods of the 
> Schema API via the SolrJ API. The reason is that the httpMethod param is never 
> set by EmbeddedSolrServer#request(SolrRequest, String), and it is later 
> required by the SchemaHandler class that actually performs the call in 
> SchemaHandler#handleRequestBody(SolrQueryRequest, SolrQueryResponse). 
> The proposal is to enhance EmbeddedSolrServer to forward the httpMethod at 
> approximately row 174 with the following: "req.getContext().put("httpMethod", 
> request.getMethod().name());". This change requires the factory methods of 
> SolrJ to add the intended method; for example, new SchemaRequest.AddField() 
> should append the POST method, similar to how SchemaRequest.Field appends 
> the GET method.
> I have written a separate EmbeddedSolrServer that replaces the one in Solr. 
> It works for now, and fields can be created on the fly using the Schema API of 
> the SolrJ client, but I would like to be able to remove this workaround.






[jira] [Created] (LUCENE-7719) UnifiedHighlighter doesn't handle some AutomatonQuery's with multi-byte chars

2017-02-28 Thread David Smiley (JIRA)
David Smiley created LUCENE-7719:


 Summary: UnifiedHighlighter doesn't handle some AutomatonQuery's 
with multi-byte chars
 Key: LUCENE-7719
 URL: https://issues.apache.org/jira/browse/LUCENE-7719
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Reporter: David Smiley


In MultiTermHighlighting, a CharacterRunAutomaton is created from the result of 
AutomatonQuery.getAutomaton, which is byte oriented, not character oriented.  
For ASCII terms this is safe, but it is not for multi-byte characters.  This is 
most likely going to rear its head with a WildcardQuery, but due to special 
casing in MultiTermHighlighting, PrefixQuery isn't affected.  Nonetheless, it 
would be nice to get a general fix in so that MultiTermHighlighting can remove 
the special cases for PrefixQuery and TermRangeQuery (both subclass 
AutomatonQuery).

AFAICT, this bug has likely been in the PostingsHighlighter since inception.
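The byte-vs-character mismatch described above can be seen without any Lucene dependency: the automaton from AutomatonQuery.getAutomaton transitions over UTF-8 bytes, while CharacterRunAutomaton feeds it UTF-16 chars. The two alphabets coincide only for ASCII, as this self-contained check illustrates (the helper name is illustrative):

```java
import java.nio.charset.StandardCharsets;

// Demonstrates why a byte-oriented automaton breaks for multi-byte terms.
public class MultiByteMismatch {

  // True only when each char of the term maps to a single identical UTF-8 byte,
  // i.e. when feeding chars into a byte-level automaton would still be correct.
  static boolean byteAndCharAlphabetsAgree(String term) {
    byte[] utf8 = term.getBytes(StandardCharsets.UTF_8);
    if (utf8.length != term.length()) return false;
    for (int i = 0; i < utf8.length; i++) {
      if ((utf8[i] & 0xFF) != term.charAt(i)) return false;
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println("lucene: " + byteAndCharAlphabetsAgree("lucene")); // true: pure ASCII
    System.out.println("я: " + byteAndCharAlphabetsAgree("я"));           // false: 1 char, 2 UTF-8 bytes
  }
}
```

"я" encodes to two UTF-8 bytes (0xD1 0x8F), so a byte-level automaton that would accept the term never sees a matching input when driven with the single char value U+044F.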






[jira] [Closed] (LUCENE-7717) UnifiedHighlighter doesn't highlight PrefixQuery with multi-byte chars

2017-02-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed LUCENE-7717.

Resolution: Fixed

> UnifiedHighlighter doesn't highlight PrefixQuery with multi-byte chars
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.1, 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Fix For: 6.4.2
>
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[jira] [Updated] (LUCENE-7717) UnifiedHighlighter doesn't highlight PrefixQuery with multi-byte chars

2017-02-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7717:
-
Affects Version/s: 5.1
Fix Version/s: 6.4.2
  Summary: UnifiedHighlighter doesn't highlight PrefixQuery with 
multi-byte chars  (was: UnifiedHighlighter don't work with russian PrefixQuery)

Closing.  I'll create a linked follow-up issue for WildcardQuery (which also 
applies to Regexp) where we can discuss how to deal with that -- the more 
general fix.  I don't think that one should hold up 6.4.2.  It will likely 
result in removing the PrefixQuery and TermRangeQuery special cases in 
MultiTermHighlighting.

> UnifiedHighlighter doesn't highlight PrefixQuery with multi-byte chars
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 5.1, 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Fix For: 6.4.2
>
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[jira] [Commented] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889612#comment-15889612
 ] 

ASF subversion and git services commented on LUCENE-7717:
-

Commit 7467c369aaae5c17584360d57a3e6226ac57d817 in lucene-solr's branch 
refs/heads/branch_6_4 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7467c36 ]

LUCENE-7717: UnifiedHighlighter and PostingsHighlighter bug in PrefixQuery and 
TermRangeQuery for multi-byte text

(cherry picked from commit d9a2c64)


> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[jira] [Commented] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889607#comment-15889607
 ] 

ASF subversion and git services commented on LUCENE-7717:
-

Commit d9a2c64041067acf4f1d967e13ab7a045502ce1c in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d9a2c64 ]

LUCENE-7717: UnifiedHighlighter and PostingsHighlighter bug in PrefixQuery and 
TermRangeQuery for multi-byte text

(cherry picked from commit ec13032)


> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[jira] [Commented] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889604#comment-15889604
 ] 

ASF subversion and git services commented on LUCENE-7717:
-

Commit ec13032a948a29f69d50d41e4859fd38ed5ca377 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ec13032 ]

LUCENE-7717: UnifiedHighlighter and PostingsHighlighter bug in PrefixQuery and 
TermRangeQuery for multi-byte text


> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[JENKINS] Lucene-Solr-NightlyTests-6.4 - Build # 18 - Still Unstable

2017-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.4/18/

5 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([99F08A5B37C68D09]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([99F08A5B37C68D09]:0)


FAILED:  org.apache.solr.cloud.hdfs.HdfsUnloadDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:46894: Error CREATEing SolrCore 
'test_unload_shard_and_collection_1': Unable to create core 
[test_unload_shard_and_collection_1] Caused by: Direct buffer memory

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:46894: Error CREATEing SolrCore 
'test_unload_shard_and_collection_1': Unable to create core 
[test_unload_shard_and_collection_1] Caused by: Direct buffer memory
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:610)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:279)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:268)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.UnloadDistributedZkTest.testUnloadShardAndCollection(UnloadDistributedZkTest.java:125)
at 
org.apache.solr.cloud.UnloadDistributedZkTest.test(UnloadDistributedZkTest.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Updated] (SOLR-9401) TestPKIAuthenticationPlugin NPE

2017-02-28 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-9401:
-
Attachment: SOLR-9401.patch

My bad; the run() method was invoked outside of the for loop.
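The patch itself isn't inlined here, so as a generic illustration only: "run() invoked outside the loop" usually means tasks are built per-iteration but only the last one is ever executed, which leaves earlier state uninitialized and can surface later as an NPE. A hypothetical sketch of the bug class, not the actual test code:

```java
import java.util.ArrayList;
import java.util.List;

// Generic illustration: invoking run() after the loop vs. inside it.
public class RunOutsideLoop {

  static List<String> executed = new ArrayList<>();

  // Buggy form: only the last task constructed in the loop ever runs.
  static void buggy(int n) {
    executed.clear();
    Runnable task = null;
    for (int i = 0; i < n; i++) {
      final int id = i;
      task = () -> executed.add("task-" + id);
    }
    if (task != null) task.run(); // outside the loop: runs once
  }

  // Fixed form: each iteration's task runs.
  static void fixed(int n) {
    executed.clear();
    for (int i = 0; i < n; i++) {
      final int id = i;
      Runnable task = () -> executed.add("task-" + id);
      task.run(); // inside the loop: runs n times
    }
  }
}
```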

> TestPKIAuthenticationPlugin NPE
> ---
>
> Key: SOLR-9401
> URL: https://issues.apache.org/jira/browse/SOLR-9401
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Attachments: SOLR-9401.patch, SOLR-9401.patch, SOLR-9401.patch
>
>
> Failure from my Jenkins, doesn't reproduce for me (this is 
> {{tests-failures.txt}}):
> {noformat}
>   2> Creating dataDir: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugi
> n_7AC33B2240CB767D-001/init-core-data-001
>   2> 14521 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (fal
> se) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, 
> clientAuth=NaN)
>   2> 14540 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Starting test
>   2> 15553 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin No SolrAuth header present
>   2> 15843 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin Invalid key request timestamp: 9 ,
>  received timestamp: 1470760833176 , TTL: 5000
>   2> 15843 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending test
>   2> NOTE: download the large Jenkins line-docs file by running 'ant 
> get-jenkins-line-docs' in the lucene directory.
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestPKIAuthenticationPlugin 
> -Dtests.method=test -Dtests.seed=7AC33B2240CB767D -Dtests.slow=true -Dtests.li
> nedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=cs -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=U
> TF-8
> [12:40:32.094] ERROR   1.35s J7  | TestPKIAuthenticationPlugin.test <<<
>> Throwable #1: java.lang.NullPointerException
>>at 
> __randomizedtesting.SeedInfo.seed([7AC33B2240CB767D:F29704F8EE371B85]:0)
>>at 
> org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:144)
> [...]
>   2> 15867 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 ###deleteCore
>   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugin_7AC33B2240CB767D-001
>   2> NOTE: test params are: codec=Asserting(Lucene62): {}, docValues:{}, 
> maxPointsInLeafNode=752, maxMBSortInHeap=5.390190554185364, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=cs, timezone=Europe/Tirane
>   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 1.8.0_77 
> (64-bit)/cpus=16,threads=1,free=255922760,total=336592896
>   2> NOTE: All tests run in this JVM: [TestIndexingPerformance, 
> TestPKIAuthenticationPlugin]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_121) - Build # 758 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/758/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 68195 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: C:\Users\jenkins\AppData\Local\Temp\ecj1233919235
 [ecj-lint] Compiling 1044 source files to 
C:\Users\jenkins\AppData\Local\Temp\ecj1233919235
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\lib\org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\client\solrj\embedded\EmbeddedSolrServer.java
 (at line 203)
 [ecj-lint] new JavaBinCodec(resolver) {
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocument(SolrDocument doc) {
 [ecj-lint]   callback.streamSolrDocument(doc);
 [ecj-lint]   //super.writeSolrDocument( doc, fields );
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocumentList(SolrDocumentList 
docs) throws IOException {
 [ecj-lint]   if (docs.size() > 0) {
 [ecj-lint] SolrDocumentList tmp = new SolrDocumentList();
 [ecj-lint] tmp.setMaxScore(docs.getMaxScore());
 [ecj-lint] tmp.setNumFound(docs.getNumFound());
 [ecj-lint] tmp.setStart(docs.getStart());
 [ecj-lint] docs = tmp;
 [ecj-lint]   }
 [ecj-lint]   callback.streamDocListInfo(docs.getNumFound(), 
docs.getStart(), docs.getMaxScore());
 [ecj-lint]   super.writeSolrDocumentList(docs);
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint]   }.setWritableDocFields(resolver). 
marshal(rsp.getValues(), out);
 [ecj-lint] 

 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\client\solrj\embedded\EmbeddedSolrServer.java
 (at line 227)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\solr\core\src\java\org\apache\solr\cloud\rule\ReplicaAssigner.java
 (at line 212)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 

[jira] [Commented] (SOLR-10134) EmbeddedSolrServer does not support SchemaAPI

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889532#comment-15889532
 ] 

ASF subversion and git services commented on SOLR-10134:


Commit 0baf2fa33cef485df94649fd408c22e6430b68cf in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0baf2fa ]

SOLR-10134: EmbeddedSolrServer handles SchemaAPI requests


> EmbeddedSolrServer does not support SchemaAPI
> -
>
> Key: SOLR-10134
> URL: https://issues.apache.org/jira/browse/SOLR-10134
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server, SolrJ
>Affects Versions: 6.4.1
>Reporter: Robert Alexandersson
>  Labels: test-driven
> Attachments: SOLR-10134.patch, SOLR-10134.patch, SOLR-10134.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The EmbeddedSolrServer does not support calls to the POST methods of 
> SchemaAPI using the SolrJ API. The reason is that the httpMethod param is 
> never set by EmbeddedSolrServer#request(SolrRequest, String), and it is later 
> required by the SchemaHandler class that actually performs the call in 
> SchemaHandler#handleRequestBody(SolrQueryRequest, SolrQueryResponse). 
> The proposal is to enhance EmbeddedSolrServer to forward the httpMethod at 
> approx. row 174 with the following: "req.getContext().put("httpMethod", 
> request.getMethod().name());". This change requires the Factory methods of 
> SolrJ to add the intended method to be used, for example: new 
> SchemaRequest.AddField() should append the POST method similar to how 
> SchemaRequest.Field appends the GET method.
> I have written a separate EmbeddedSolrServer that replaces the one in Solr. 
> It works for now, and fields can be created on the fly using the SchemaAPI 
> of the SolrJ client, but I would like to be able to remove this workaround.
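The forwarding step proposed above can be sketched on a plain map, independent of Solr's classes; `buildContext` and the `Method` enum below are hypothetical stand-ins for the SolrRequest context and METHOD type, not the patch's actual code:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the proposed change (hypothetical names; the SolrJ request
// method and the local request context are simulated with an enum and a plain
// map). The point is that the embedded server must copy the HTTP method into
// the request context, since SchemaHandler reads the "httpMethod" key to decide
// how to process the call.
public class HttpMethodForwardingSketch {
  enum Method { GET, POST, PUT, DELETE }

  static Map<String, Object> buildContext(Method method) {
    Map<String, Object> context = new HashMap<>();
    // The one-line forwarding step from the proposal:
    context.put("httpMethod", method.name());
    return context;
  }

  public static void main(String[] args) {
    System.out.println(buildContext(Method.POST).get("httpMethod")); // prints POST
  }
}
```

Without this key, a handler that dispatches on the HTTP verb sees no method at all, which matches the failure mode described in the issue.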






Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1701 - Failure

2017-02-28 Thread Mikhail Khludnev
Thanks, Ishan!

On Wed, Mar 1, 2017 at 2:12 AM, Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> Pushed a fix for the precommit failure 
> (8b4502c21842374b93336a88c3978c0cc0afa205).
>
> On Wed, Mar 1, 2017 at 4:25 AM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Pushed a fix (0b7b1443c27c9d666a3cca8f683d4b19fbf9ce14) for the test
>> failure (caused by 2adc11c70af98feb8842f7349001374fb4785194).
>> Looking into the precommit issue.
>>
>> On Wed, Mar 1, 2017 at 3:56 AM, Ishan Chattopadhyaya <
>> ichattopadhy...@gmail.com> wrote:
>>
>>> Indeed, precommit is broken as well! Looking into it..
>>>
>>> On Wed, Mar 1, 2017 at 2:40 AM, Mikhail Khludnev 
>>> wrote:
>>>
 This test fails for me too, and the following breaks my precommit as
 well. It's a pity.

  [ecj-lint] 35. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 20)
  [ecj-lint] import java.util.Map;
  [ecj-lint]^
  [ecj-lint] The import java.util.Map is never used
  [ecj-lint] --
  [ecj-lint] 36. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 21)
  [ecj-lint] import java.util.Map.Entry;
  [ecj-lint]^^^
  [ecj-lint] The import java.util.Map.Entry is never used
  [ecj-lint] --
  [ecj-lint] 37. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 22)
  [ecj-lint] import java.util.concurrent.ConcurrentHashMap;
  [ecj-lint]^^
  [ecj-lint] The import java.util.concurrent.ConcurrentHashMap is never used
  [ecj-lint] --


 On Tue, Feb 28, 2017 at 11:48 PM, Apache Jenkins Server <
 jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1701/
>
> 3 tests failed.
> FAILED:  
> org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits
>
> Error Message:
>
>
> Stack Trace:
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([D98614DB8241323C:E4F145CAB8886C60]:0)
> at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
> at org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits(UpdateLogTest.java:123)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
> at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_121) - Build # 6426 - Unstable!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6426/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.update.processor.TestNamedUpdateProcessors.test

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([BA341A55C0FE6FFE:3260258F6E020206]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.update.processor.TestNamedUpdateProcessors.test(TestNamedUpdateProcessors.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Updated] (SOLR-10219) diagnose HDFS test problems with Java9 and/or re-enable these tests

2017-02-28 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-10219:

Labels: Java9  (was: )

> diagnose HDFS test problems with Java9 and/or re-enable these tests
> ---
>
> Key: SOLR-10219
> URL: https://issues.apache.org/jira/browse/SOLR-10219
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>  Labels: Java9
>
> As part of SOLR-8874, Uwe disabled all HDFS based tests under java9 at the 
> build.xml/pom.xml level by adding a conditional to the existing 
> {{tests.disableHdfs}} system property (Note: this property exists so that 
> HDFS tests can be disabled by default on windows, but still run on cygwin if 
> users wish to set that property)
> We need to get to the bottom of what exactly the issue(s) are with HDFS and 
> file specific bugs to track the problems






[jira] [Commented] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2017-02-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889437#comment-15889437
 ] 

Ishan Chattopadhyaya commented on SOLR-9516:


I can confirm that the exact setup works fine with old UI, but doesn't work 
with new UI. I'll reproduce and *try to* post logs. (When I saw this last time, 
I had no idea how to even copy logs)

[~sarkaramr...@gmail.com], would you have a chance to have a look at this 
issue, please? Given that you're actively working on the UI these days, and 
given my limited UI knowledge, I might need your help here.

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.






[jira] [Commented] (SOLR-10216) DIH: last_index_time not updated on if 0 docs updated

2017-02-28 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889402#comment-15889402
 ] 

Alexandre Rafalovitch commented on SOLR-10216:
--

Have you tested it against the latest Solr? There have been a LOT of changes 
related to the cloud in the meanwhile.

> DIH: last_index_time not updated on if 0 docs updated
> -
>
> Key: SOLR-10216
> URL: https://issues.apache.org/jira/browse/SOLR-10216
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - DataImportHandler
>Affects Versions: 5.5
>Reporter: jmlucjav
>Priority: Minor
>
> After getting our interval for calling delta index shorter and shorter, I
> have found out that last_index_time in dataimport.properties is not
> updated every time the indexing runs; it is skipped if no docs were added.
> This happens at least in the following scenario:
> - running delta as full index
> ( /dataimport?command=full-import=false=true )
> - Solrcloud setup, so dataimport.properties is in zookeeper
> - Solr 5.5.0
> I understand skipping the commit on the index if no docs were updated is a
> nice optimization, but I believe the last_index_time info should be updated
> in all cases, so it reflects reality. We, for instance, are looking at this
> piece of information in order to do other stuff.






[jira] [Comment Edited] (SOLR-9530) Add an Atomic Update Processor

2017-02-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889400#comment-15889400
 ] 

Amrit Sarkar edited comment on SOLR-9530 at 3/1/17 2:54 AM:


Considering Noble's and Ishan's suggestions, cooked up a new patch with the 
following:

1. No solrconfig parameter(s) required for this URP now.

2. The URP will take inline parameters exactly as Noble mentioned:
{code}processor=Atomic_newfield=add=set_i=inc{code}

3. Both atomic and conventional updates are allowed as incoming documents to 
the URP.
   a. For atomic updates, the atomic operation in the incoming doc should 
match the parameters specified in the processor call.
   e.g. {"id":"1","title":{"set":"A"}}  ||  processor=Atomic=set

4. After the conversion to atomic-style, the latest _version_ will be added to 
the updated doc. If _version_ is not present, send it as-is.

5. If the update faces a version conflict, retry by fetching the latest 
_version_ from the index and updating the SolrInputDoc. Maximum retries is 
set to 5, hardcoded.

6. If the parameters are not sufficient to convert the incoming document to 
atomic-style, abort the update.
   e.g. {"id":"1","title":"A"}  ||  processor=Atomic=set
There is no point sending this document for an update via the URP.

{noformat}
new file:   
solr/core/src/java/org/apache/solr/update/processor/AtomicUpdateProcessorFactory.java
new file:   
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
{noformat}

Tried to write a test case for multiple threads executing URP simultaneously, 
but was not able to replicate the scenario exactly. The test-method is 
commented out in the patch.


was (Author: sarkaramr...@gmail.com):
Considering Noble's and Ishan's suggestions, cooked up a new patch with the 
following:

1. No solrconfig parameter(s) required for this URP now.

2. The URP will take inline parameters exactly as Noble mentioned:
{code}processor=Atomic_newfield=add=set_i=inc{code}

3. Both atomic and conventional updates are allowed as incoming documents to 
the URP.
   a. For atomic updates, the atomic operation in the incoming doc should 
match the parameters specified in the processor call.
   e.g. {"id":"1","title":{"set":"A"}}  |  processor=Atomic=set

4. After the conversion to atomic-style, the latest _version_ will be added to 
the updated doc. If _version_ is not present, send it as-is.

5. If the update faces a version conflict, retry by fetching the latest 
_version_ from the index and updating the SolrInputDoc. Maximum retries is 
set to 5, hardcoded.

6. If the parameters are not sufficient to convert the incoming document to 
atomic-style, abort the update.
e.g. {"id":"1","title":"A"} | processor=Atomic=set
There is no point sending this document for an update via the URP.

{noformat}
new file:   
solr/core/src/java/org/apache/solr/update/processor/AtomicUpdateProcessorFactory.java
new file:   
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
{noformat}

Tried to write a test case for multiple threads executing URP simultaneously, 
but was not able to replicate the scenario exactly. The test-method is 
commented out in the patch.

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch, 
> SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.






[jira] [Updated] (SOLR-9530) Add an Atomic Update Processor

2017-02-28 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9530:
---
Attachment: SOLR-9530.patch

Considering Noble's and Ishan's suggestions, cooked up a new patch with the 
following:

1. No solrconfig parameter(s) required for this URP now.

2. The URP will take inline parameters exactly as Noble mentioned:
{code}processor=Atomic_newfield=add=set_i=inc{code}

3. Both atomic and conventional updates are allowed as incoming documents to 
the URP.
   a. For atomic updates, the atomic operation in the incoming doc should 
match the parameters specified in the processor call.
   e.g. {"id":"1","title":{"set":"A"}}  |  processor=Atomic=set

4. After the conversion to atomic-style, the latest _version_ will be added to 
the updated doc. If _version_ is not present, send it as-is.

5. If the update faces a version conflict, retry by fetching the latest 
_version_ from the index and updating the SolrInputDoc. Maximum retries is 
set to 5, hardcoded.

6. If the parameters are not sufficient to convert the incoming document to 
atomic-style, abort the update.
e.g. {"id":"1","title":"A"} | processor=Atomic=set
There is no point sending this document for an update via the URP.

{noformat}
new file:   
solr/core/src/java/org/apache/solr/update/processor/AtomicUpdateProcessorFactory.java
new file:   
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java
{noformat}

Tried to write a test case for multiple threads executing URP simultaneously, 
but was not able to replicate the scenario exactly. The test-method is 
commented out in the patch.
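The conversion rule in steps 3 and 6 can be sketched independently of Solr's URP API. All names below are hypothetical illustrations, not the patch's actual code: each plain field value is wrapped in a single-entry map keyed by the operation supplied as a request parameter, and the update is aborted when a field has no operation.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the atomic-style conversion (hypothetical names): {"title":"A"}
// with operation "set" becomes {"title":{"set":"A"}}; the unique key passes
// through untouched; a field without a configured operation aborts the update.
public class AtomicConversionSketch {
  static Map<String, Object> toAtomic(Map<String, Object> doc,
                                      Map<String, String> opsByField) {
    Map<String, Object> atomic = new LinkedHashMap<>();
    for (Map.Entry<String, Object> field : doc.entrySet()) {
      if ("id".equals(field.getKey())) {     // unique key is not wrapped
        atomic.put(field.getKey(), field.getValue());
        continue;
      }
      String op = opsByField.get(field.getKey());
      if (op == null) {                      // step 6: no operation -> abort
        throw new IllegalArgumentException("no operation for " + field.getKey());
      }
      Map<String, Object> wrapped = new LinkedHashMap<>();
      wrapped.put(op, field.getValue());     // e.g. {set=A}
      atomic.put(field.getKey(), wrapped);
    }
    return atomic;
  }

  public static void main(String[] args) {
    Map<String, Object> doc = new LinkedHashMap<>();
    doc.put("id", "1");
    doc.put("title", "A");
    Map<String, String> ops = new LinkedHashMap<>();
    ops.put("title", "set");
    System.out.println(toAtomic(doc, ops)); // prints {id=1, title={set=A}}
  }
}
```

The retry-on-version-conflict and _version_ handling from steps 4 and 5 would sit around this conversion in the real processor.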

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch, 
> SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> 
>   
> add
>   
>   
>   
> 
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.






[jira] [Commented] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2017-02-28 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889399#comment-15889399
 ] 

Alexandre Rafalovitch commented on SOLR-9516:
-

I am looking at: 
https://cwiki.apache.org/confluence/display/solr/Kerberos+Authentication+Plugin
It says:
{quote}
In order for your browser to access the Solr Admin UI after enabling Kerberos 
authentication, it must be able to negotiate with the Kerberos authenticator 
service to allow you access. Each browser supports this differently, and some 
(like Chrome) do not support it at all. If you see 401 errors when trying to 
access the Solr Admin UI after enabling Kerberos authentication, it's likely 
your browser has not been configured properly to know how or where to negotiate 
the authentication request.

Detailed information on how to set up your browser is beyond the scope of this 
documentation; please see your system administrators for Kerberos for details 
on how to configure your browser.
{quote}

Are we - absolutely - sure that the exact same setup works with the old UI? 
Could we get full browser/network traces for a request made from old UI and 
from New UI? Preferably while the backend is actually running with full TRACE 
log.

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 07-36-29.png
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.






[jira] [Commented] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889395#comment-15889395
 ] 

Kevin Risden commented on SOLR-10215:
-

I can confirm that 6.4.1 doesn't work with HDFS NameNode HA. 6.3.0 works just 
fine. The nightly build of 6.5.0 from 
https://builds.apache.org/job/Solr-Artifacts-6.x/lastSuccessfulBuild/artifact/solr/package/solr-6.5.0-254.tgz
 works as well.

My testing setup: https://github.com/risdenk/solr_hdfs_ha_docker

This works pretty well on 32GB of RAM with AWS. I was using something similar 
to this:
{code}
docker-machine create --driver amazonec2 --amazonec2-region us-west-2 
--amazonec2-request-spot-instance --amazonec2-spot-price 0.50 
--amazonec2-root-size 50 --amazonec2-instance-type m4.2xlarge aws01
eval $(docker-machine env aws01)
./run.sh
{code}

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1, 6.4.0
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
> x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
> [testing_shard1_replica1]: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> mycluster
> {code}
> I started Solr with the params configured as system params instead of in 
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
> -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
> -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
> -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 
> 2.5).
> I tried with a couple variations of defining the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these variations worked with Solr 6.4.1 (the first 2 got the same 
> error as above, the last was just wrong so it got a different error).
> I believe this problem is isolated to Solr 6.4.x. I tested the same setup (as 
> in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
> address also works fine, but that negates the High Availability feature 
> (which is like failover, for those who don't know).
> _edit: the problem isn't just 6.4.1, I believe it's probably in 6.4.0 also_



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10219) diagnose HDFS test problems with Java9 and/or re-enable these tests

2017-02-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15889360#comment-15889360
 ] 

Hoss Man commented on SOLR-10219:
-

The specific commits...

* 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/common-build.xml;h=d2672587edc11c535dd4d10ec30af200731bd3f2;hp=78e10aabac17f1aeaffd6f97d1bbf53fd6085360;hb=f93f90c;hpb=cc774994fc0faa3711f762b3c51b4d011739f628
* https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3a4e1d1

I've been manually experimenting running (nightly) tests on java9 (build 
9-ea+157) with {{tests.disableHdfs=false}} and I've yet to encounter any test 
failures.

I suspect maybe this change was a mistake?  Perhaps what Uwe was seeing was a 
lot of failures related to Hadoop classes due to other bugs like SOLR-8876 & 
SOLR-10199 -- tests which don't always use HDFS, and which now have their own 
{{assumeFalse(JRE_IS_MINIMUM_JAVA9)}} logic.

[~thetaphi] -- do you remember the specific motivations for the 
{{tests.disableHdfs=true if java9}} change?  do any tests fail for you on your 
java9 install if you revert that?
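As context for readers, {{JRE_IS_MINIMUM_JAVA9}} boils down to a specification-version check on the running JVM. A self-contained sketch of that logic (illustrative only -- this is not Lucene's actual {{Constants}} implementation):

```java
public class Java9Check {
  // Pre-9 JVMs report java.specification.version as "1.8" etc.;
  // Java 9+ report "9", "10", "11.0.2", ...
  static boolean isAtLeastJava9(String specVersion) {
    if (specVersion.startsWith("1.")) {
      return false; // old "1.x" scheme is always below 9
    }
    try {
      return Integer.parseInt(specVersion.split("\\.")[0]) >= 9;
    } catch (NumberFormatException e) {
      return false; // unparsable value: assume pre-9
    }
  }

  public static void main(String[] args) {
    // The value a test rule would actually gate on:
    System.out.println(isAtLeastJava9(System.getProperty("java.specification.version")));
    System.out.println(isAtLeastJava9("1.8")); // false
    System.out.println(isAtLeastJava9("9"));   // true
  }
}
```

In the tests themselves this feeds an {{assumeFalse(JRE_IS_MINIMUM_JAVA9)}} call, which skips (rather than fails) the test on Java 9 and later.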

> diagnose HDFS test problems with Java9 and/or re-enable these tests
> ---
>
> Key: SOLR-10219
> URL: https://issues.apache.org/jira/browse/SOLR-10219
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>
> As part of SOLR-8874, Uwe disabled all HDFS based tests under java9 at the 
> build.xml/pom.xml level by adding a conditional to the existing 
> {{tests.disableHdfs}} system property (Note: this property exists so that 
> HDFS tests can be disabled by default on windows, but still run on cygwin if 
> users wish to set that property)
> We need to get to the bottom of what exactly the issue(s) are with HDFS 
> under Java 9, and file specific bugs to track the problems






[jira] [Created] (SOLR-10219) diagnose HDFS test problems with Java9 and/or re-enable these tests

2017-02-28 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10219:
---

 Summary: diagnose HDFS test problems with Java9 and/or re-enable 
these tests
 Key: SOLR-10219
 URL: https://issues.apache.org/jira/browse/SOLR-10219
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man
Assignee: Hoss Man


As part of SOLR-8874, Uwe disabled all HDFS based tests under java9 at the 
build.xml/pom.xml level by adding a conditional to the existing 
{{tests.disableHdfs}} system property (Note: this property exists so that HDFS 
tests can be disabled by default on windows, but still run on cygwin if users 
wish to set that property)

We need to get to the bottom of what exactly the issue(s) are with HDFS under 
Java 9, and file specific bugs to track the problems






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_121) - Build # 2967 - Still Failing!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2967/
Java: 64bit/jdk1.8.0_121 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 68153 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj334502563
 [ecj-lint] Compiling 1044 source files to /tmp/ecj334502563
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 203)
 [ecj-lint] new JavaBinCodec(resolver) {
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocument(SolrDocument doc) {
 [ecj-lint]   callback.streamSolrDocument(doc);
 [ecj-lint]   //super.writeSolrDocument( doc, fields );
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocumentList(SolrDocumentList 
docs) throws IOException {
 [ecj-lint]   if (docs.size() > 0) {
 [ecj-lint] SolrDocumentList tmp = new SolrDocumentList();
 [ecj-lint] tmp.setMaxScore(docs.getMaxScore());
 [ecj-lint] tmp.setNumFound(docs.getNumFound());
 [ecj-lint] tmp.setStart(docs.getStart());
 [ecj-lint] docs = tmp;
 [ecj-lint]   }
 [ecj-lint]   callback.streamDocListInfo(docs.getNumFound(), 
docs.getStart(), docs.getMaxScore());
 [ecj-lint]   super.writeSolrDocumentList(docs);
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint]   }.setWritableDocFields(resolver). 
marshal(rsp.getValues(), out);
 [ecj-lint] 

 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 227)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 212)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 212)
 [ecj-lint] 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 696 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/696/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 68111 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /var/tmp/ecj329277519
 [ecj-lint] Compiling 1044 source files to /var/tmp/ecj329277519
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet/jars/org.restlet-2.3.0.jar
 [ecj-lint] invalid Class-Path header in manifest of jar file: 
/export/home/jenkins/.ivy2/cache/org.restlet.jee/org.restlet.ext.servlet/jars/org.restlet.ext.servlet-2.3.0.jar
 [ecj-lint] --
 [ecj-lint] 1. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 203)
 [ecj-lint] new JavaBinCodec(resolver) {
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocument(SolrDocument doc) {
 [ecj-lint]   callback.streamSolrDocument(doc);
 [ecj-lint]   //super.writeSolrDocument( doc, fields );
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint] @Override
 [ecj-lint] public void writeSolrDocumentList(SolrDocumentList 
docs) throws IOException {
 [ecj-lint]   if (docs.size() > 0) {
 [ecj-lint] SolrDocumentList tmp = new SolrDocumentList();
 [ecj-lint] tmp.setMaxScore(docs.getMaxScore());
 [ecj-lint] tmp.setNumFound(docs.getNumFound());
 [ecj-lint] tmp.setStart(docs.getStart());
 [ecj-lint] docs = tmp;
 [ecj-lint]   }
 [ecj-lint]   callback.streamDocListInfo(docs.getNumFound(), 
docs.getStart(), docs.getMaxScore());
 [ecj-lint]   super.writeSolrDocumentList(docs);
 [ecj-lint] }
 [ecj-lint] 
 [ecj-lint]   }.setWritableDocFields(resolver). 
marshal(rsp.getValues(), out);
 [ecj-lint] 

 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] --
 [ecj-lint] 2. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/client/solrj/embedded/EmbeddedSolrServer.java
 (at line 227)
 [ecj-lint] return (NamedList) new 
JavaBinCodec(resolver).unmarshal(in);
 [ecj-lint]^^
 [ecj-lint] Resource leak: '<unassigned Closeable value>' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 5. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/Assign.java
 (at line 101)
 [ecj-lint] Collections.sort(shardIdNames, (o1, o2) -> {
 [ecj-lint]^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 6. WARNING in 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/core/src/java/org/apache/solr/cloud/rule/ReplicaAssigner.java
 (at line 212)
 [ecj-lint] Collections.sort(sortedLiveNodes, (n1, n2) -> {
 [ecj-lint]   ^^^
 [ecj-lint] (Recovered) Internal inconsistency detected during lambda shape 
analysis
 [ecj-lint] --
 [ecj-lint] 7. WARNING in 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1162 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1162/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesWithDelete

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([C4F3B7CF753963FF:8F4E8F3341E6F6C6]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesWithDelete(UpdateLogTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesDependingOnNonAddShouldThrowException

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([C4F3B7CF753963FF:939C3F90B7153C90]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 713 - Failure

2017-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/713/

No tests ran.

Build Log:
[...truncated 39745 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (27.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 30.6 MB in 0.03 sec (1214.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 65.0 MB in 0.05 sec (1201.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 75.4 MB in 0.07 sec (1085.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6204 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6204 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 214 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (270.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 40.3 MB in 0.04 sec (1057.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 141.7 MB in 0.13 sec (1085.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 143.0 MB in 0.13 sec (1102.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=28472). Happy searching!
   

Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1701 - Failure

2017-02-28 Thread Ishan Chattopadhyaya
Pushed a fix for the precommit failure
(8b4502c21842374b93336a88c3978c0cc0afa205).

On Wed, Mar 1, 2017 at 4:25 AM, Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> Pushed a fix (0b7b1443c27c9d666a3cca8f683d4b19fbf9ce14) for the test
> failure (caused by 2adc11c70af98feb8842f7349001374fb4785194).
> Looking into the precommit issue.
>
> On Wed, Mar 1, 2017 at 3:56 AM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Indeed, precommit is broken as well! Looking into it..
>>
>> On Wed, Mar 1, 2017 at 2:40 AM, Mikhail Khludnev  wrote:
>>
>>> This test fails for me too, and the following breaks my precommit as
>>> well. It's a pity...
>>>
>>>  [ecj-lint] 35. ERROR in /x1/jenkins/jenkins-slave/work
>>> space/Lucene-Solr-Tests-master/solr/core/src/java/org/apache
>>> /solr/store/blockcache/Metrics.java (at line 20)
>>>  [ecj-lint] import java.util.Map;
>>>  [ecj-lint]^
>>>  [ecj-lint] The import java.util.Map is never used
>>>  [ecj-lint] --
>>>  [ecj-lint] 36. ERROR in /x1/jenkins/jenkins-slave/work
>>> space/Lucene-Solr-Tests-master/solr/core/src/java/org/apache
>>> /solr/store/blockcache/Metrics.java (at line 21)
>>>  [ecj-lint] import java.util.Map.Entry;
>>>  [ecj-lint]^^^
>>>  [ecj-lint] The import java.util.Map.Entry is never used
>>>  [ecj-lint] --
>>>  [ecj-lint] 37. ERROR in /x1/jenkins/jenkins-slave/work
>>> space/Lucene-Solr-Tests-master/solr/core/src/java/org/apache
>>> /solr/store/blockcache/Metrics.java (at line 22)
>>>  [ecj-lint] import java.util.concurrent.ConcurrentHashMap;
>>>  [ecj-lint]^^
>>>  [ecj-lint] The import java.util.concurrent.ConcurrentHashMap is never
>>> used
>>>  [ecj-lint] --
>>>
>>>
>>> On Tue, Feb 28, 2017 at 11:48 PM, Apache Jenkins Server <
>>> jenk...@builds.apache.org> wrote:
>>>
 Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1701/

 3 tests failed.
 FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdates
 AfterMultipleCommits

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at __randomizedtesting.SeedInfo.seed([D98614DB8241323C:E4F145CA
 B8886C60]:0)
 at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.j
 ava:255)
 at org.apache.solr.update.UpdateLogTest.testApplyPartialUpdates
 AfterMultipleCommits(UpdateLogTest.java:123)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcce
 ssorImpl.java:62)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMe
 thodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:498)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(R
 andomizedRunner.java:1713)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evalua
 te(RandomizedRunner.java:907)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evalua
 te(RandomizedRunner.java:943)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evalu
 ate(RandomizedRunner.java:957)
 at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRes
 toreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
 at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evalua
 te(TestRuleSetupTeardownChained.java:49)
 at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(Ab
 stractBeforeAfterRule.java:45)
 at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(
 TestRuleThreadAndTestName.java:48)
 at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.eval
 uate(TestRuleIgnoreAfterMaxFailures.java:64)
 at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRu
 leMarkFailure.java:47)
 at com.carrotsearch.randomizedtesting.rules.StatementAdapter.ev
 aluate(StatementAdapter.java:36)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$Stateme
 ntRunner.run(ThreadLeakControl.java:368)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTim
 eoutingTask(ThreadLeakControl.java:817)
 at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evalu
 ate(ThreadLeakControl.java:468)
 at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingl
 eTest(RandomizedRunner.java:916)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evalua
 te(RandomizedRunner.java:802)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evalua
 te(RandomizedRunner.java:852)
 at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evalua
 te(RandomizedRunner.java:863)
 at 

[jira] [Updated] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-10215:
-
Affects Version/s: 6.4.0

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1, 6.4.0
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
> x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
> [testing_shard1_replica1]: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> mycluster
> {code}
> I started Solr with the params configured as system params instead of in 
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
> -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
> -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
> -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 
> 2.5).
> I tried with a couple variations of defining the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these variations worked with Solr 6.4.1 (the first 2 got the same 
> error as above, the last was just wrong so it got a different error).
> I believe this problem is isolated to Solr 6.4.x. I tested the same setup (as 
> in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
> address also works fine, but that negates the High Availability feature 
> (which is like failover, for those who don't know).
> _edit: the problem isn't just 6.4.1, I believe it's probably in 6.4.0 also_






[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_121) - Build # 2966 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2966/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseParallelGC

3 tests failed.
FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesOnMultipleInPlaceUpdatesInSequence

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([F83C9738731CA199:361B61518A8C7DF1]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesOnMultipleInPlaceUpdatesInSequence(UpdateLogTest.java:76)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([F83C9738731CA199:C54BC62949D5FFC5]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
  

Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1701 - Failure

2017-02-28 Thread Ishan Chattopadhyaya
Pushed a fix (0b7b1443c27c9d666a3cca8f683d4b19fbf9ce14) for the test
failure (caused by 2adc11c70af98feb8842f7349001374fb4785194).
Looking into the precommit issue.

On Wed, Mar 1, 2017 at 3:56 AM, Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> Indeed, precommit is broken as well! Looking into it..
>
> On Wed, Mar 1, 2017 at 2:40 AM, Mikhail Khludnev  wrote:
>
>> This test fails for me too, and the following breaks my precommit as
>> well.. It's a pity..
>>
>>  [ecj-lint] 35. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 20)
>>  [ecj-lint] import java.util.Map;
>>  [ecj-lint]^
>>  [ecj-lint] The import java.util.Map is never used
>>  [ecj-lint] --
>>  [ecj-lint] 36. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 21)
>>  [ecj-lint] import java.util.Map.Entry;
>>  [ecj-lint]^^^
>>  [ecj-lint] The import java.util.Map.Entry is never used
>>  [ecj-lint] --
>>  [ecj-lint] 37. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 22)
>>  [ecj-lint] import java.util.concurrent.ConcurrentHashMap;
>>  [ecj-lint]^^
>>  [ecj-lint] The import java.util.concurrent.ConcurrentHashMap is never used
>>  [ecj-lint] --
>>
>>
>> On Tue, Feb 28, 2017 at 11:48 PM, Apache Jenkins Server <
>> jenk...@builds.apache.org> wrote:
>>
>>> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1701/
>>>
>>> 3 tests failed.
>>> FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits
>>>
>>> Error Message:
>>>
>>>
>>> Stack Trace:
>>> java.lang.NullPointerException
>>> at __randomizedtesting.SeedInfo.seed([D98614DB8241323C:E4F145CAB8886C60]:0)
>>> at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
>>> at org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits(UpdateLogTest.java:123)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> at java.lang.reflect.Method.invoke(Method.java:498)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
>>> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>>> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>>> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>>> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>>> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>>> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>>> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>>> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>>> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
>>> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>>> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>>> at 

[jira] [Comment Edited] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889012#comment-15889012
 ] 

Cassandra Targett edited comment on SOLR-10215 at 2/28/17 10:46 PM:


With Hoss' kind assistance, I did a git bisect on the commits between 
releases/solr/6.4.0 and releases/solr/6.3.0 and traced when this went bad to:

{code}
c40cd2df49c80aee1ab2b6fea634191edc8b944f is the first bad commit
commit c40cd2df49c80aee1ab2b6fea634191edc8b944f
Author: Andrzej Bialecki 
Date:   Tue Jan 3 14:58:07 2017 +0100

   SOLR-9854: Collect metrics for index merges and index store IO (squashed).

:04 04 a3fd94768739f287b9afb9186cbf37f870080e86 
23b7bba0a00be03b8a453d8fdd2fd586e8f36441 Msolr
bisect run success
{code}

However, the good news appears to be that this is already fixed in branch_6_4; I'm going to assume SOLR-10182 fixes it.

Tomorrow morning I'll do a build of that branch locally and check it again, but 
am going to resolve this as fixed for now.
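For readers unfamiliar with the mechanics, {{git bisect run}} automates a binary search over the commit range for the first bad commit. A minimal model of that search in Python (the toy history and the badness predicate are illustrative, not the actual Solr commit range):

```python
def first_bad(commits, is_bad):
    """Binary search for the first bad commit, mirroring what `git bisect run`
    automates. Assumes commits are ordered oldest to newest, the last commit
    is bad, and once a commit is bad every later commit is bad too."""
    lo, hi = 0, len(commits) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(commits[mid]):
            hi = mid        # mid is bad: the first bad commit is mid or earlier
        else:
            lo = mid + 1    # mid is good: the first bad commit is after mid
    return commits[lo]

# Toy history: ten commits, regression introduced at commit 6.
history = list(range(10))
print(first_bad(history, lambda c: c >= 6))  # -> 6
```

The search needs only O(log n) checkouts, which is why bisecting the 6.3.0..6.4.0 range is practical even when each probe requires a rebuild.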


was (Author: ctargett):
With Hoss' kind assistance, I did a git bisect on the commits between 
releases/solr/6_4 and releases/solr/6_3 and traced when this went bad to:

{code}
c40cd2df49c80aee1ab2b6fea634191edc8b944f is the first bad commit
commit c40cd2df49c80aee1ab2b6fea634191edc8b944f
Author: Andrzej Bialecki 
Date:   Tue Jan 3 14:58:07 2017 +0100

   SOLR-9854: Collect metrics for index merges and index store IO (squashed).

:04 04 a3fd94768739f287b9afb9186cbf37f870080e86 
23b7bba0a00be03b8a453d8fdd2fd586e8f36441 Msolr
bisect run success
{code}

However, the good news appears to be that this is already fixed in branch_6_4; I'm going to assume SOLR-10182 fixes it.

Tomorrow morning I'll do a build of that branch locally and check it again, but 
am going to resolve this as fixed for now.

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
> x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
> [testing_shard1_replica1]: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> mycluster
> {code}
> I started Solr with the params configured as system params instead of in 
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
> -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
> -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
> -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 
> 2.5).
> I tried with a couple variations of defining the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these variations worked with Solr 6.4.1 (the first 2 got the same 
> error as above, the last was just wrong so it got a different error).
> I believe this problem is isolated to Solr 6.4.x. I tested the same setup (as 
> in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
> address also works fine, but that negates the High Availability feature 
> (which is like failover, for those who don't know).
> _edit: the problem isn't just 6.4.1, I believe it's probably in 6.4.0 also_
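For context on why {{mycluster}} must resolve: under HDFS HA, a logical nameservice is not a DNS host. HDFS clients resolve it from the Hadoop configuration they can see (here, the directory given by {{-Dsolr.hdfs.confdir}}). A sketch of the relevant {{hdfs-site.xml}} entries (the {{nn1}}/{{nn2}} ids and host names are illustrative placeholders, not taken from this report):

```xml
<!-- hdfs-site.xml: client-side definition of the logical nameservice
     "mycluster". Hosts and namenode ids below are placeholders. -->
<property><name>dfs.nameservices</name><value>mycluster</value></property>
<property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>namenode1.example.com:8020</value></property>
<property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>namenode2.example.com:8020</value></property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With a nameservice defined this way, the home URI is normally written without a port, i.e. {{hdfs://mycluster/solr-index}}, since the RPC addresses come from the configuration above.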



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10215.
--
Resolution: Fixed

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>






[jira] [Commented] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15889012#comment-15889012
 ] 

Cassandra Targett commented on SOLR-10215:
--

With Hoss' kind assistance, I did a git bisect on the commits between 
releases/solr/6_4 and releases/solr/6_3 and traced when this went bad to:

{code}
c40cd2df49c80aee1ab2b6fea634191edc8b944f is the first bad commit
commit c40cd2df49c80aee1ab2b6fea634191edc8b944f
Author: Andrzej Bialecki 
Date:   Tue Jan 3 14:58:07 2017 +0100

   SOLR-9854: Collect metrics for index merges and index store IO (squashed).

:04 04 a3fd94768739f287b9afb9186cbf37f870080e86 
23b7bba0a00be03b8a453d8fdd2fd586e8f36441 Msolr
bisect run success
{code}

However, the good news appears to be that this is already fixed in branch_6_4; I'm going to assume SOLR-10182 fixes it.

Tomorrow morning I'll do a build of that branch locally and check it again, but 
am going to resolve this as fixed for now.

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>






Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1701 - Failure

2017-02-28 Thread Ishan Chattopadhyaya
Indeed, precommit is broken as well! Looking into it..

On Wed, Mar 1, 2017 at 2:40 AM, Mikhail Khludnev  wrote:

> This test fails for me too, and the following breaks my precommit as
> well.. It's a pity..
>
>  [ecj-lint] 35. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 20)
>  [ecj-lint] import java.util.Map;
>  [ecj-lint]^
>  [ecj-lint] The import java.util.Map is never used
>  [ecj-lint] --
>  [ecj-lint] 36. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 21)
>  [ecj-lint] import java.util.Map.Entry;
>  [ecj-lint]^^^
>  [ecj-lint] The import java.util.Map.Entry is never used
>  [ecj-lint] --
>  [ecj-lint] 37. ERROR in /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/apache/solr/store/blockcache/Metrics.java (at line 22)
>  [ecj-lint] import java.util.concurrent.ConcurrentHashMap;
>  [ecj-lint]^^
>  [ecj-lint] The import java.util.concurrent.ConcurrentHashMap is never used
>  [ecj-lint] --
>
>
> On Tue, Feb 28, 2017 at 11:48 PM, Apache Jenkins Server <
> jenk...@builds.apache.org> wrote:
>
>> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1701/
>>
>> 3 tests failed.
>> FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits
>>
>> Error Message:
>>
>>
>> Stack Trace:
>> java.lang.NullPointerException
>> at __randomizedtesting.SeedInfo.seed([D98614DB8241323C:E4F145CAB8886C60]:0)
>> at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
>> at org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits(UpdateLogTest.java:123)
>> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>> at java.lang.reflect.Method.invoke(Method.java:498)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
>> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
>> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
>> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverri
>> 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 727 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/727/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([8779FE59C673E40E:BA0EAF48FCBABA52]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits(UpdateLogTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesWithDelete

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([8779FE59C673E40E:CCC4C6A5F2AC7137]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 

[jira] [Commented] (SOLR-9467) Document Transformer to Remove Fields

2017-02-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888964#comment-15888964
 ] 

David Smiley commented on SOLR-9467:


If {{useDocValuesAsStored}} was generalized to a hypothetical 
{{excludeFromWildcardRetrieval}} (damn it's hard to come up with good names for 
this!) then it could not only work as it does now but also for stored fields.  
To me that feels better than removing the stored fields as a doc transformer 
because by that time, all that text has wound up in the document cache already 
-- and you don't always want that (see SOLR-10117). But sometimes you do, granted. Another reason besides highlighting to want large fields _out_ of the doc cache is to support atomic updates.

BTW a field that one might want to remove from {{fl=*}} would almost certainly 
_not_ have docvalues, so I think a performance discussion about the comparison 
isn't applicable.
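The semantics under discussion are easy to state; a language-neutral sketch in Python (names are illustrative only, this is not Solr's DocTransformer API, and per the point above it models only the output shape, after the full stored document has already been read):

```python
def remove_fields(doc, names):
    """Model of the proposed [fl.rm v="title"] transformer: drop the named
    fields from an already-retrieved document, keeping everything else
    that fl=* selected."""
    drop = set(names)
    return {field: value for field, value in doc.items() if field not in drop}

doc = {"id": "1", "title": "one very large stored field", "price": 9.99}
print(remove_fields(doc, ["title"]))  # -> {'id': '1', 'price': 9.99}
```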

> Document Transformer to Remove Fields
> -
>
> Key: SOLR-9467
> URL: https://issues.apache.org/jira/browse/SOLR-9467
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2
>Reporter: Gus Heck
> Attachments: SOLR-9467.patch, SOLR-9467.patch
>
>
> Given that SOLR-3191 has become bogged down and inactive, evidently stuck in 
> low level details, and since I have wished several times for some way to just 
> get that one big field out of my results to improve transfer times without 
> making a big brittle list of all my other fields. I'd like to propose a 
> DocumentTransformer that accomplishes this.
> It would look something like this:
> {code}fl=*,[fl.rm v="title"]{code} 
> Since removing one field with a known name is probably the most common case 
> I'd like to start by keeping this simple, and if further features like globs 
> or lists of fields are desired, subsequent Jira tickets can be opened to add 
> them. Not attached to specifics here, only looking to keep things simple and 
> solve the key use case. If you don't like fl.rm as a name for a transformer, 
> suggest a better one (for example). 






[JENKINS] Lucene-Solr-Tests-6.x - Build # 760 - Failure

2017-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/760/

3 tests failed.
FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesDependingOnNonAddShouldThrowException

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([279F5A4142422273:70F0D21E806E7D1C]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesDependingOnNonAddShouldThrowException(UpdateLogTest.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesOnMultipleInPlaceUpdatesInSequence

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([279F5A4142422273:E9B8AC28BBD2FE1B]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 

Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1701 - Failure

2017-02-28 Thread Ishan Chattopadhyaya
The UpdateLogTest seems to be failing lately. Looking into why this could
be...

On Wed, Mar 1, 2017 at 2:18 AM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1701/
>
> 3 tests failed.
> FAILED:  org.apache.solr.update.UpdateLogTest.
> testApplyPartialUpdatesAfterMultipleCommits
>
> Error Message:
>
>
> Stack Trace:
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([D98614DB8241323C:
> E4F145CAB8886C60]:0)
> at org.apache.solr.update.UpdateLogTest.ulogAdd(
> UpdateLogTest.java:255)
> at org.apache.solr.update.UpdateLogTest.
> testApplyPartialUpdatesAfterMultipleCommits(UpdateLogTest.java:123)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(
> RandomizedRunner.java:1713)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(
> RandomizedRunner.java:907)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(
> RandomizedRunner.java:943)
> at com.carrotsearch.randomizedtesting.
> RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.
> java:57)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(
> TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(
> TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> StatementRunner.run(ThreadLeakControl.java:368)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.
> forkTimeoutingTask(ThreadLeakControl.java:817)
> at com.carrotsearch.randomizedtesting.
> ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.
> runSingleTest(RandomizedRunner.java:916)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(
> RandomizedRunner.java:802)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(
> RandomizedRunner.java:852)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(
> RandomizedRunner.java:863)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.
> java:57)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(
> TestRuleStoreClassName.java:41)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(
> TestRuleAssertionsRequired.java:53)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(
> TestRuleIgnoreTestSuites.java:54)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> StatementRunner.run(ThreadLeakControl.java:368)
> at java.lang.Thread.run(Thread.java:745)
>
>
> FAILED:  

[jira] [Commented] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888945#comment-15888945
 ] 

Ishan Chattopadhyaya commented on LUCENE-7717:
--

[~dsmiley], I don't mind including this fix if you think it's low-risk and 
should be included. Feel free to backport it to the release branch; I'm 
waiting on SOLR-10215 anyway.

> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, snippet==null?null:snippet.toString());



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_121) - Build # 19074 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19074/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.update.TestInPlaceUpdatesStandalone.testUpdatingDocValues

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([5F50400405F3509A:89209DA36EAE2ECC]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.update.TestInPlaceUpdatesStandalone.testUpdatingDocValues(TestInPlaceUpdatesStandalone.java:179)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([5F50400405F3509A:622711153F3A0EC6]:0)
at 

[jira] [Updated] (SOLR-10218) The Schema API "replace-field-type" does not generate the SolrParameters for a SimilarityFactory correctly

2017-02-28 Thread Benjamin Deininger (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benjamin Deininger updated SOLR-10218:
--
Description: 
When sending a JSON POST to the Schema API to replace a field type, the 
following JSON does not pass the SolrParameters properly to the 
BM25SimilarityFactory.  

{code:javascript}
{"replace-field-type":{"name":"tint","class":"solr.TrieIntField","positionIncrementGap":"0","precisionStep":"8","similarity":{"class":"solr.BM25SimilarityFactory","k1":1.25,"b":0.75}}}
{code}

The `appendAttrs` function in the FieldTypeXmlAdapter parses k1 and b into 
attributes instead of children.  
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/rest/schema/FieldTypeXmlAdapter.java#L155

{code:xml}
<similarity class="org.apache.lucene.search.similarities.BM25Similarity" k1="1.25"/>
{code}

Based on the XML examples for similarity, this should actually be the following:

{code:xml}
<similarity class="solr.BM25SimilarityFactory">
  <float name="k1">0.1</float>
  <float name="b">0.1</float>
</similarity>
{code}

The similarities block in JSON should be handled differently so that the XML is 
generated appropriately.

{code:java}
protected static Element appendSimilarityAttrs(Document doc, Element elm, Map<String, Object> json) {
  String clazz = (String) json.get("class");
  elm.setAttribute("class", clazz);
  json.remove("class");

  for (Map.Entry<String, Object> entry : json.entrySet()) {
    Object val = entry.getValue();
    if (val != null && !(val instanceof Map)) {
      Element element = doc.createElement(val.getClass().getSimpleName().toLowerCase());
      element.setAttribute("name", entry.getKey());
      element.setTextContent(entry.getValue().toString());
      elm.appendChild(element);
    }
  }
  return elm;
}
{code}
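For illustration only, here is a minimal, self-contained sketch of the proposed child-element rendering, using plain JDK DOM APIs. The class name {{SimilarityXmlSketch}} and method name {{appendSimilarityParams}} are hypothetical; this is not the actual FieldTypeXmlAdapter code. Note that a JSON number like 1.25 typically arrives as a {{Double}}, so the child element is named {{double}} here, whereas Solr's hand-written schema examples use {{float}}.

```java
import java.io.StringWriter;
import java.util.LinkedHashMap;
import java.util.Map;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.OutputKeys;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class SimilarityXmlSketch {

    // Sketch of the proposed fix: scalar similarity params become child
    // elements (named after their Java type) instead of attributes.
    static Element appendSimilarityParams(Document doc, Element elm, Map<String, Object> json) {
        elm.setAttribute("class", (String) json.remove("class"));
        for (Map.Entry<String, Object> entry : json.entrySet()) {
            Object val = entry.getValue();
            if (val != null && !(val instanceof Map)) {
                // e.g. a Double value 1.25 -> <double name="k1">1.25</double>
                Element child = doc.createElement(val.getClass().getSimpleName().toLowerCase());
                child.setAttribute("name", entry.getKey());
                child.setTextContent(val.toString());
                elm.appendChild(child);
            }
        }
        return elm;
    }

    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element sim = doc.createElement("similarity");

        // The "similarity" object from the replace-field-type JSON in the report.
        Map<String, Object> json = new LinkedHashMap<>();
        json.put("class", "solr.BM25SimilarityFactory");
        json.put("k1", 1.25);
        json.put("b", 0.75);

        doc.appendChild(appendSimilarityParams(doc, sim, json));

        // Serialize the resulting element so the structure is visible.
        StringWriter out = new StringWriter();
        Transformer t = TransformerFactory.newInstance().newTransformer();
        t.setOutputProperty(OutputKeys.OMIT_XML_DECLARATION, "yes");
        t.transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}
```

Running {{main}} prints the similarity element with {{k1}} and {{b}} rendered as child elements rather than attributes.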





  was:
When sending a JSON POST to the Schema API to replace a field type, the 
following JSON does not pass the SolrParameters properly to the 
BM25SimilarityFactory.  

{code:javascript}
{"replace-field-type":{"name":"tint","class":"solr.TrieIntField","positionIncrementGap":"0","precisionStep":"8","similarity":{"class":"solr.BM25SimilarityFactory","k1":1.25,"b":0.75}}}
{code}

The `appendAttrs` function in the FieldTypeXmlAdapter parses k1 and b into 
attributes instead of children.  
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/rest/schema/FieldTypeXmlAdapter.java#L155

{code:xml}
<similarity class="org.apache.lucene.search.similarities.BM25Similarity" k1="1.25"/>
{code}

Based on the XML examples for similarity, this should actually be the following:

{code:xml}
<similarity class="solr.BM25SimilarityFactory">
  <float name="k1">0.1</float>
  <float name="b">0.1</float>
</similarity>
{code}

The similarities block in JSON should be handled differently so that the XML is 
generated appropriately.

{code:java}
protected static Element appendSimilarityAttrs(Document doc, Element elm, Map<String, Object> json) {
  String clazz = (String) json.get("class");
  elm.setAttribute("class", clazz);
  json.remove("class");

  for (Map.Entry<String, Object> entry : json.entrySet()) {
    Object val = entry.getValue();
    if (val != null && !(val instanceof Map)) {
      Element element = doc.createElement(val.getClass().getSimpleName().toLowerCase());
      element.setAttribute("name", entry.getKey());
      element.setTextContent(entry.getValue().toString());
      elm.appendChild(element);
    }
  }
  return elm;
}
{code}






> The Schema API "replace-field-type" does not generate the SolrParameters for 
> a SimilarityFactory correctly
> --
>
> Key: SOLR-10218
> URL: https://issues.apache.org/jira/browse/SOLR-10218
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Affects Versions: 6.4.1
>Reporter: Benjamin Deininger
>Priority: Minor
>
> When sending a JSON POST to the Schema API to replace a field type, the 
> following JSON does not pass the SolrParameters properly to the 
> BM25SimilarityFactory.  
> {code:javascript}
> {"replace-field-type":{"name":"tint","class":"solr.TrieIntField","positionIncrementGap":"0","precisionStep":"8","similarity":{"class":"solr.BM25SimilarityFactory","k1":1.25,"b":0.75}}}
> {code}
> The `appendAttrs` function in the FieldTypeXmlAdapter parses k1 and b into 
> attributes instead of children.  
> https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/rest/schema/FieldTypeXmlAdapter.java#L155
> {code:xml}
> <similarity class="org.apache.lucene.search.similarities.BM25Similarity" k1="1.25"/>
> {code}
> Based on the XML examples for similarity, this should actually be the 
> following:
> {code:xml}
> <similarity class="solr.BM25SimilarityFactory">
>   <float name="k1">0.1</float>
>   <float name="b">0.1</float>
> </similarity>
> {code}
> The similarities block in JSON should be handled differently so that the XML 
> is generated appropriately.
> 

[jira] [Created] (SOLR-10218) The Schema API "replace-field-type" does not generate the SolrParameters for a SimilarityFactory correctly

2017-02-28 Thread Benjamin Deininger (JIRA)
Benjamin Deininger created SOLR-10218:
-

 Summary: The Schema API "replace-field-type" does not generate the 
SolrParameters for a SimilarityFactory correctly
 Key: SOLR-10218
 URL: https://issues.apache.org/jira/browse/SOLR-10218
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Schema and Analysis
Affects Versions: 6.4.1
Reporter: Benjamin Deininger
Priority: Minor


When sending a JSON POST to the Schema API to replace a field type, the 
following JSON does not pass the SolrParameters properly to the 
BM25SimilarityFactory.  

{code:javascript}
{"replace-field-type":{"name":"tint","class":"solr.TrieIntField","positionIncrementGap":"0","precisionStep":"8","similarity":{"class":"solr.BM25SimilarityFactory","k1":1.25,"b":0.75}}}
{code}

The `appendAttrs` function in the FieldTypeXmlAdapter parses k1 and b into 
attributes instead of children.  
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/rest/schema/FieldTypeXmlAdapter.java#L155

{code:xml}
<similarity class="org.apache.lucene.search.similarities.BM25Similarity" k1="1.25"/>
{code}

Based on the XML examples for similarity, this should actually be the following:

{code:xml}
<similarity class="solr.BM25SimilarityFactory">
  <float name="k1">0.1</float>
  <float name="b">0.1</float>
</similarity>
{code}

The similarities block in JSON should be handled differently so that the XML is 
generated appropriately.

{code:java}
protected static Element appendSimilarityAttrs(Document doc, Element elm, Map<String, Object> json) {
  String clazz = (String) json.get("class");
  elm.setAttribute("class", clazz);
  json.remove("class");

  for (Map.Entry<String, Object> entry : json.entrySet()) {
    Object val = entry.getValue();
    if (val != null && !(val instanceof Map)) {
      Element element = doc.createElement(val.getClass().getSimpleName().toLowerCase());
      element.setAttribute("name", entry.getKey());
      element.setTextContent(entry.getValue().toString());
      elm.appendChild(element);
    }
  }
  return elm;
}
{code}







--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-master - Build # 1701 - Failure

2017-02-28 Thread Mikhail Khludnev
This test fails for me too, and the following breaks my precommit as well.
It's a pity.

 [ecj-lint] 35. ERROR in /x1/jenkins/jenkins-slave/
workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/
apache/solr/store/blockcache/Metrics.java (at line 20)
 [ecj-lint] import java.util.Map;
 [ecj-lint]^
 [ecj-lint] The import java.util.Map is never used
 [ecj-lint] --
 [ecj-lint] 36. ERROR in /x1/jenkins/jenkins-slave/
workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/
apache/solr/store/blockcache/Metrics.java (at line 21)
 [ecj-lint] import java.util.Map.Entry;
 [ecj-lint]^^^
 [ecj-lint] The import java.util.Map.Entry is never used
 [ecj-lint] --
 [ecj-lint] 37. ERROR in /x1/jenkins/jenkins-slave/
workspace/Lucene-Solr-Tests-master/solr/core/src/java/org/
apache/solr/store/blockcache/Metrics.java (at line 22)
 [ecj-lint] import java.util.concurrent.ConcurrentHashMap;
 [ecj-lint]^^
 [ecj-lint] The import java.util.concurrent.ConcurrentHashMap is never used
 [ecj-lint] --


On Tue, Feb 28, 2017 at 11:48 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1701/
>
> 3 tests failed.
> FAILED:  org.apache.solr.update.UpdateLogTest.
> testApplyPartialUpdatesAfterMultipleCommits
>
> Error Message:
>
>
> Stack Trace:
> java.lang.NullPointerException
> at __randomizedtesting.SeedInfo.seed([D98614DB8241323C:
> E4F145CAB8886C60]:0)
> at org.apache.solr.update.UpdateLogTest.ulogAdd(
> UpdateLogTest.java:255)
> at org.apache.solr.update.UpdateLogTest.
> testApplyPartialUpdatesAfterMultipleCommits(UpdateLogTest.java:123)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(
> NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(
> DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(
> RandomizedRunner.java:1713)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(
> RandomizedRunner.java:907)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(
> RandomizedRunner.java:943)
> at com.carrotsearch.randomizedtesting.
> RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.
> java:57)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(
> TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(
> TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures
> $1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.
> evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$
> StatementRunner.run(ThreadLeakControl.java:368)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.
> forkTimeoutingTask(ThreadLeakControl.java:817)
> at com.carrotsearch.randomizedtesting.
> ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.
> runSingleTest(RandomizedRunner.java:916)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(
> RandomizedRunner.java:802)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(
> RandomizedRunner.java:852)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(
> RandomizedRunner.java:863)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.
> SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.
> java:57)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(
> AbstractBeforeAfterRule.java:45)
> at com.carrotsearch.randomizedtesting.rules.
> StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(
> TestRuleStoreClassName.java:41)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.
> NoShadowingOrOverridesOnMethodsRule$1.evaluate(
> NoShadowingOrOverridesOnMethodsRule.java:40)
> at 

[jira] [Commented] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1549#comment-1549
 ] 

David Smiley commented on LUCENE-7717:
--

IntelliJ IDEA has clued me in to this else-if's dead code paths for a long 
while now, and I'm kicking myself for putting it off. LOL.

> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, snippet==null?null:snippet.toString());



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1701 - Failure

2017-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1701/

3 tests failed.
FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D98614DB8241323C:E4F145CAB8886C60]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesAfterMultipleCommits(UpdateLogTest.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesDependingOnNonAddShouldThrowException

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D98614DB8241323C:8EE99C84406D6D53]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 

[jira] [Commented] (LUCENE-7712) SimpleQueryString should support auto fuziness

2017-02-28 Thread Lee Hinman (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1538#comment-1538
 ] 

Lee Hinman commented on LUCENE-7712:


I am happy to submit a patch to add this; however, I don't know what the auto 
value should be. I wasn't able to find it except in older (3.x) documentation, 
which mentioned it may be 0.5. Is that the correct fuzziness value to use when 
no value is specified?

> SimpleQueryString should support auto fuziness
> --
>
> Key: LUCENE-7712
> URL: https://issues.apache.org/jira/browse/LUCENE-7712
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/queryparser
>Reporter: David Pilato
>
> Apparently the simpleQueryString query does not support auto fuzziness as the 
> query string does.
> So {{foo:bar~1}} works for both simple query string and query string queries.
> But {{foo:bar~}} works for query string query but not for simple query string 
> query.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7717:
-
Attachment: LUCENE-7717.patch

Here's a patch. It fixes the MultiTermHighlighting class in both the 
{{postingshighlight}} package and {{uhighlight}}.  It adds a test method to 
{{TestUnifiedHighlighterMTQ}}.  I also beefed up the test for a related 
method, {{testWhichMTQMatched}}, to avoid potential inadvertent changes to the 
CharRunAutomata toString that people might depend on.  It appears there was no 
breakage in this case, but until I added more query types, whether it did or 
didn't break wasn't apparent.

> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch, LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3862 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3862/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter

Error Message:
Collection not found: withShardField

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: withShardField
at 
__randomizedtesting.SeedInfo.seed([BAF4CA14766CAF97:EFA42286DA956067]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.getCollectionNames(CloudSolrClient.java:1379)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1072)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1042)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:160)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:232)
at 
org.apache.solr.cloud.CustomCollectionTest.testRouteFieldForImplicitRouter(CustomCollectionTest.java:141)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1507#comment-1507
 ] 

David Smiley commented on LUCENE-7717:
--

Here's my take on it:  The UnifiedHighlighter (and the PostingsHighlighter 
from which it derives) processes the MultiTermQueries (e.g. wildcards) in the 
query and creates multiple {{CharacterRunAutomaton}} instances intended to 
match the same things.  {{CharacterRunAutomaton}} takes an {{Automaton}} as 
input, and when it does its processing, it matches the character code points 
(integers from 0 to 0x10FFFF) against the integers in the Automaton.  However, 
this strategy assumes that the Automaton was constructed from character code 
points.  But {{AutomatonQuery.getAutomaton}} is intended to match byte by byte 
(integers 0 to 255).  {{PrefixQuery.toAutomaton}} will get 2 bytes for the "я" 
in BytesRef form, and add 2 states.  This does not line up with the 
assumptions of CharacterRunAutomaton.
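The mismatch can be seen with plain Java, without touching the Lucene API: "я" 
is a single code point (U+044F) but two UTF-8 bytes, so an automaton built 
from its BytesRef form has two transitions where a character-oriented run 
automaton takes a single step. A minimal, self-contained sketch:

```java
import java.nio.charset.StandardCharsets;

public class CodePointVsBytes {
    public static void main(String[] args) {
        String term = "я";
        // One Unicode code point (U+044F)...
        int codePoints = term.codePointCount(0, term.length());
        // ...but two bytes in its UTF-8 (BytesRef) form.
        byte[] utf8 = term.getBytes(StandardCharsets.UTF_8);
        System.out.println("code points: " + codePoints);   // 1
        System.out.println("utf8 bytes:  " + utf8.length);  // 2
        // A CharacterRunAutomaton-style matcher steps once with 0x044F;
        // a byte-oriented automaton expects two steps, 0xD1 then 0x8F.
        System.out.printf("code point: 0x%04X%n", term.codePointAt(0));
        System.out.printf("bytes: 0x%02X 0x%02X%n",
                utf8[0] & 0xFF, utf8[1] & 0xFF);
    }
}
```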

A short-term, immediate "fix" is simply to put AutomatonQuery last in the 
if-else list, as Dmitry indicated.  With that, PrefixQuery will work again.  
This was broken by LUCENE-6367 (Lucene 5.1).  TermRangeQuery, which also now 
extends AutomatonQuery, will likewise work -- it was broken by LUCENE-5879 
(Lucene 5.2).  Again, back when MultiTermHighlighting was first written, 
neither of those queries extended AutomatonQuery.  _But there will be bugs for 
other types of AutomatonQuery (namely WildcardQuery and RegexpQuery) that have 
yet to be reported._

[~rcmuir] or [~mikemccand] I wonder if you have any thoughts on how to fix 
this.  One idea I have is to _not_ use a CharacterRunAutomaton in the 
UnifiedHighlighter, and use a ByteRunAutomaton instead.  Then, add a 
{{ByteRunAutomaton.run(char[] ...etc)}} that converts each character to the 
equivalent UTF-8 bytes to match.  Even with that, I wonder if this points to 
areas to improve the automata API so that people don't bump into this trap in 
the future.  For example, maybe have the Automaton self-report whether it is 
byte oriented, Unicode code point oriented, or something custom.  Then, 
RunAutomaton could throw an exception if there is a mismatch.  However, that 
would be a runtime error; maybe the Automata could be typed.
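The "typed automata" idea above could look roughly like the following sketch. 
The names (AlphabetKind, TypedRunAutomaton) are invented here for illustration 
and are not part of the Lucene API; only the type-check behaviour is shown, the 
actual matching is elided.

```java
// Hypothetical sketch: the alphabet kind travels with the automaton, so a
// mismatched use fails loudly instead of silently matching nothing.
public class TypedRunAutomaton {
    enum AlphabetKind { UTF8_BYTES, UNICODE_CODE_POINTS, CUSTOM }

    private final AlphabetKind kind;

    TypedRunAutomaton(AlphabetKind kind) {
        this.kind = kind;
    }

    /** Refuse to run input whose alphabet doesn't match the automaton's. */
    boolean run(String s, AlphabetKind inputKind) {
        if (kind != inputKind) {
            throw new IllegalArgumentException(
                "automaton alphabet is " + kind + " but input is " + inputKind);
        }
        return true; // real matching elided; only the type check is sketched
    }

    public static void main(String[] args) {
        TypedRunAutomaton byteAutomaton =
            new TypedRunAutomaton(AlphabetKind.UTF8_BYTES);
        try {
            // Character input against a byte-oriented automaton: rejected.
            byteAutomaton.run("я", AlphabetKind.UNICODE_CODE_POINTS);
        } catch (IllegalArgumentException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```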

Anyway, what I'd like to do is a short-term fix that addresses many common 
cases and the title of this issue, and then do a more thorough fix in a 
follow-on issue.  [~ichattopadhyaya] do you think this could go into 6.4.2, or 
are you only looking for "critical" issues?  It's debatable what's critical 
and what's not.  This bug has been around since 5.1, so perhaps it isn't.

(a patch will follow shortly)


> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+158) - Build # 2965 - Unstable!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2965/
Java: 32bit/jdk-9-ea+158 -server -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
'sanitycheck' results against client: 
org.apache.solr.client.solrj.impl.HttpSolrClient@d7f022 (not leader) wrong 
[docid] for SolrDocument{id=0, 
id_field_copy_that_does_not_support_in_place_update_s=0, title_s=title0, 
id_i=0, inplace_updatable_float=101.0, _version_=1560606232593563648, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0, [docid]=8042} expected:<11780> but 
was:<8042>

Stack Trace:
java.lang.AssertionError: 'sanitycheck' results against client: 
org.apache.solr.client.solrj.impl.HttpSolrClient@d7f022 (not leader) wrong 
[docid] for SolrDocument{id=0, 
id_field_copy_that_does_not_support_in_place_update_s=0, title_s=title0, 
id_i=0, inplace_updatable_float=101.0, _version_=1560606232593563648, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0, [docid]=8042} expected:<11780> but 
was:<8042>
at 
__randomizedtesting.SeedInfo.seed([1F51BF83FA7C0A0B:97058059548067F3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.assertDocIdsAndValuesInResults(TestInPlaceUpdatesDistrib.java:442)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.assertDocIdsAndValuesAgainstAllClients(TestInPlaceUpdatesDistrib.java:413)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.docValuesUpdateTest(TestInPlaceUpdatesDistrib.java:321)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:140)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

Re: Need to modify boolean AND search

2017-02-28 Thread Nilesh Kamani
I tried the autoRelax feature, but it does not serve the purpose. autoRelax
is applied when words are removed from the query field due to stopwords, etc.,
so it is applied before the results are fetched.
In my case, if no results are found, I want the best possible results.
So if I search for +A +B +C +D +E +F ... +Z and no document is found with all
the phrases, I want the best possible result, say a document with +A +C +F
(the maximum number of the phrases found in any document).

On Tue, Feb 28, 2017 at 12:37 PM, Nilesh Kamani 
wrote:

> Hello All,
>
> I want to modify a boolean AND search.
> Just to give an example.
> If somebody searches for +A +B +C, but there is no document which
> contains all three phrases, it should return at least +A +B or +A +C.
> Could you please tell me which classes will I need to modify for this ?
>
>
> Thanks,
> Nilesh Kamani
>
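The behaviour Nilesh describes (when no document matches every required 
clause, fall back to the documents matching the most clauses) can be 
prototyped outside Lucene; within Lucene, issuing SHOULD clauses with 
{{BooleanQuery.Builder.setMinimumNumberShouldMatch}} and lowering that minimum 
on an empty result is the usual starting point. A minimal, Lucene-free sketch 
of the fallback ranking over term sets:

```java
import java.util.*;

public class BestEffortAndSearch {
    /** Return the doc ids whose term sets overlap the query the most. */
    static List<Integer> bestMatches(List<Set<String>> docs, Set<String> query) {
        int best = 0;
        Map<Integer, Integer> overlap = new HashMap<>();
        for (int id = 0; id < docs.size(); id++) {
            Set<String> hit = new HashSet<>(docs.get(id));
            hit.retainAll(query);              // query terms this doc contains
            overlap.put(id, hit.size());
            best = Math.max(best, hit.size());
        }
        List<Integer> result = new ArrayList<>();
        if (best == 0) return result;          // nothing matched at all
        for (Map.Entry<Integer, Integer> e : overlap.entrySet()) {
            if (e.getValue() == best) result.add(e.getKey());
        }
        Collections.sort(result);
        return result;
    }

    public static void main(String[] args) {
        List<Set<String>> docs = Arrays.asList(
            new HashSet<>(Arrays.asList("A", "C", "F")),
            new HashSet<>(Arrays.asList("A", "B")),
            new HashSet<>(Arrays.asList("Z")));
        // No doc contains all of A..F, so fall back to the best overlap.
        System.out.println(bestMatches(docs, new HashSet<>(
            Arrays.asList("A", "B", "C", "D", "E", "F"))));  // [0]
    }
}
```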


[jira] [Commented] (LUCENE-7619) Add WordDelimiterGraphFilter

2017-02-28 Thread Jigar Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888747#comment-15888747
 ] 

Jigar Shah commented on LUCENE-7619:


Hello [~mikemccand]

+1 

Many thanks for fixing this!

I am using WordDelimiterFilter (which often breaks phrase queries on words 
with punctuation). I am currently using Lucene 6.4.1 in production. Can you 
please suggest which classes I should patch on Lucene 6.4.1 to use this 
feature? Would patching just WordDelimiterGraphFilter and using it in the 
token stream instead of WordDelimiterFilter be fine, or are there other 
dependent classes that I would have to patch (please provide a list if there 
are)?

Once Lucene 6.5 is released I will upgrade to it, so I will get the better 
tested fix; but for now I would like to patch Lucene 6.4.1 if the patch is 
compatible and simple.

> Add WordDelimiterGraphFilter
> 
>
> Key: LUCENE-7619
> URL: https://issues.apache.org/jira/browse/LUCENE-7619
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: master (7.0), 6.5
>
> Attachments: after.png, before.png, LUCENE-7619.patch, 
> LUCENE-7619.patch, LUCENE-7619.patch
>
>
> Currently, {{WordDelimiterFilter}} doesn't try to set the {{posLen}} 
> attribute and so it creates graphs like this:
> !before.png!
> but with this patch (still a work in progress) it creates this graph instead:
> !after.png!
> This means (today) positional queries when using WDF at search time are 
> buggy, but since we fixed LUCENE-7603, with this change here you should be 
> able to use positional queries with WDGF.
> I'm also trying to produce holes properly (removes logic from the current WDF 
> that swallows a hole when whole token is just delimiters).
> Surprisingly, it's actually quite easy to tweak WDF to create a graph (unlike 
> e.g. {{SynonymGraphFilter}}) because it's already creating the necessary new 
> positions, and its output graph never has side paths, except for single 
> tokens that skip nodes because they have {{posLen > 1}}.  I.e. the only fix 
> to make, I think, is to set {{posLen}} properly.  And it really helps that it 
> does its own "new token buffering + sorting" already.






[jira] [Commented] (SOLR-9467) Document Transformer to Remove Fields

2017-02-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888721#comment-15888721
 ] 

Erick Erickson commented on SOLR-9467:
--

re: lazy field loading. Does it really matter enough to care about? With no 
option to _not_ compress the stored data, and given that the stored data is 
compressed on a per-document basis, even loading one field is enough to cause 
most of the work to be done.

Although, contrariwise, with all the "use doc values as stored" stuff I can 
argue with myself that suppressing loading the fields first, a la SOLR-3191, 
would result in measurable savings, since you could arrange things so that 
fl=*=all_non_dv_fields would avoid any decompression.

BTW, the stall on SOLR-3191 is mostly that I never seem to have the time I 
think I do.

> Document Transformer to Remove Fields
> -
>
> Key: SOLR-9467
> URL: https://issues.apache.org/jira/browse/SOLR-9467
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SearchComponents - other
>Affects Versions: 6.2
>Reporter: Gus Heck
> Attachments: SOLR-9467.patch, SOLR-9467.patch
>
>
> Given that SOLR-3191 has become bogged down and inactive, evidently stuck in 
> low-level details, and since I have wished several times for some way to just 
> get that one big field out of my results to improve transfer times without 
> making a big brittle list of all my other fields, I'd like to propose a 
> DocumentTransformer that accomplishes this.
> It would look something like this:
> {code}fl=*,[fl.rm v="title"]{code} 
> Since removing one field with a known name is probably the most common case, 
> I'd like to start by keeping this simple; if further features like globs or 
> lists of fields are desired, subsequent Jira tickets can be opened to add 
> them. I'm not attached to specifics here, only looking to keep things simple 
> and solve the key use case. If you don't like fl.rm as a name for the 
> transformer, suggest a better one (for example). 






[jira] [Commented] (LUCENE-7715) Simplify NearSpansUnordered

2017-02-28 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888703#comment-15888703
 ] 

Paul Elschot commented on LUCENE-7715:
--

bq. ... how it deals with the initial state that all sub spans have a start 
position of -1.

There is no need for that; the intermediate data structure is a priority queue 
that is not itself a Spans.

If the names of this priority queue (SpanTotalLengthEndPositionWindow) and its 
methods (startDocument/nextPosition) are misleading, they need to be improved.

The core search tests and precommit pass.


> Simplify NearSpansUnordered
> ---
>
> Key: LUCENE-7715
> URL: https://issues.apache.org/jira/browse/LUCENE-7715
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: master (7.0)
>Reporter: Paul Elschot
>Priority: Minor
> Attachments: LUCENE-7715.patch
>
>
> {code}
> git diff --stat master...
>  .../spans/NearSpansUnordered.java   | 211 -
>  1 file changed, 59 insertions(+), 152 deletions(-)
> {code}






[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 275 - Failure

2017-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/275/

No tests ran.

Build Log:
[...truncated 39750 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.5.0-src.tgz...
   [smoker] 30.7 MB in 0.03 sec (1182.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.5.0.tgz...
   [smoker] 65.2 MB in 0.06 sec (1163.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.5.0.zip...
   [smoker] 75.6 MB in 0.06 sec (1176.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6236 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.5.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6236 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.5.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (283.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.5.0-src.tgz...
   [smoker] 40.4 MB in 0.04 sec (1005.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.5.0.tgz...
   [smoker] 134.7 MB in 0.12 sec (1143.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.5.0.zip...
   [smoker] 135.9 MB in 0.12 sec (1144.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.5.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=10752). Happy searching!
   [smoker] 
   [smoker] 

Re: 6.4.2 release?

2017-02-28 Thread Ishan Chattopadhyaya
FYI. A new blocker bug was identified, SOLR-10215, and I'm holding off on
the RC until this one is addressed.

On Mon, Feb 27, 2017 at 8:57 PM, Christine Poerschke (BLOOMBERG/ LONDON) <
cpoersc...@bloomberg.net> wrote:

> https://issues.apache.org/jira/browse/SOLR-10192 copy/paste fix just
> backported to branch_6_4 but happy to revert if there were to be any
> concerns about it.
>
> Thanks,
> Christine
>
> From: ichattopadhy...@gmail.com At: 02/21/17 18:05:35
> To: dev@lucene.apache.org
> Cc: Christine Poerschke (BLOOMBERG/ LONDON)
> Subject: Re: 6.4.2 release?
>
> I would like to volunteer for this 6.4.2 release. Planning to cut a RC as
> soon as blockers are resolved.
> One of the unresolved blocker issues seems to be LUCENE-7698 (I'll take a
> look to see if there are more). If there are more issues that should be
> part of the release, please let me know or mark as blockers in jira.
>
> Thanks,
> Ishan
>
>
> On Thu, Feb 16, 2017 at 3:48 AM, Adrien Grand  wrote:
>
>> I had initially planned on releasing tomorrow but the mirrors replicated
>> faster than I had thought they would so I finished the release today,
>> including the addition of the new 5.5.4 indices for backward testing so I
>> am good with proceeding with a new release now.
>>
>> Le mer. 15 févr. 2017 à 16:13, Adrien Grand  a écrit :
>>
>> +1
>>
>> One ask I have is to wait for the 5.5.4 release process to be complete so
>> that branch_6_4 has the 5.5.4 backward indices when we cut the first RC. I
>> will let you know when I am done.
>>
>> Le mer. 15 févr. 2017 à 15:53, Christine Poerschke (BLOOMBERG/ LONDON) <
>> cpoersc...@bloomberg.net> a écrit :
>>
>> Hi,
>>
>> These two could be minor candidates for inclusion:
>>
>> * https://issues.apache.org/jira/browse/SOLR-10083
>> Fix instanceof check in ConstDoubleSource.equals
>>
>> * https://issues.apache.org/jira/browse/LUCENE-7676
>> FilterCodecReader to override more super-class methods
>>
>> The former had narrowly missed the 6.4.1 release.
>>
>> Regards,
>>
>> Christine
>>
>> From: dev@lucene.apache.org At: 02/15/17 14:27:52
>> To: dev@lucene.apache.org
>> Subject: Re:6.4.2 release?
>>
>> Hi devs,
>>
>> These two issues seem serious enough to warrant a new release from
>> branch_6_4:
>> * SOLR-10130: Serious performance degradation in Solr 6.4.1 due to the
>> new metrics collection
>> * SOLR-10138: Transaction log replay can hit an NPE due to new Metrics
>> code.
>>
>> What do you think? Anything else that should go there?
>>
>> ---
>> Best regards,
>>
>> Andrzej Bialecki
>>
>>
>


[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+158) - Build # 19073 - Still Unstable!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19073/
Java: 64bit/jdk-9-ea+158 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesWithDelete

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([BE0FD42F5CF3873C:F5B2ECD3682C1205]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesWithDelete(UpdateLogTest.java:174)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:547)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.update.UpdateLogTest.testApplyPartialUpdatesDependingOnNonAddShouldThrowException

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([BE0FD42F5CF3873C:E9605C709EDFD853]:0)
at org.apache.solr.update.UpdateLogTest.ulogAdd(UpdateLogTest.java:255)
at 

[jira] [Comment Edited] (SOLR-9401) TestPKIAuthenticationPlugin NPE

2017-02-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888622#comment-15888622
 ] 

Steve Rowe edited comment on SOLR-9401 at 2/28/17 6:42 PM:
---

{noformat}
   [junit4]   2> 6377 ERROR 
(TEST-TestPKIAuthenticationPlugin.test-seed#[5C3D870A565F6F53]) [] 
o.a.s.s.PKIAuthenticationPlugin Invalid key request timestamp: 1488287736283 , 
received timestamp: 1488287741616 , TTL: 5000
[...]
   [junit4] FAILURE 6.05s | TestPKIAuthenticationPlugin.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: No principal obtained
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5C3D870A565F6F53:D469B8D0F8A302AB]:0)
   [junit4]>at 
org.apache.solr.security.TestPKIAuthenticationPlugin.run(TestPKIAuthenticationPlugin.java:169)
   [junit4]>at 
org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:101)
{noformat}

Note that doAuthenticate() was only invoked once, and so the TTL was exceeded 
only once, even though the retry loop executed 3 times.  One way to fix this: 
move the lambda execution inside the retry loop.
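The once-vs-three-times behaviour described above can be sketched in isolation. This is a hypothetical reduction of the pattern, not the actual TestPKIAuthenticationPlugin code; all names are made up for illustration:

```java
import java.util.function.Supplier;

// Reduction of the bug pattern: the supplier is invoked a single time before
// the retry loop, so the loop just re-inspects the same stale result.
public class RetrySketch {
    static int calls = 0;

    // Stand-in for the authentication lambda; always "fails".
    static Integer authenticateOnce() { calls++; return null; }

    // Buggy: lambda executed once, outside the loop.
    static Integer buggyRetry(Supplier<Integer> op, int attempts) {
        Integer result = op.get();              // invoked a single time
        for (int i = 0; i < attempts; i++) {
            if (result != null) return result;  // never changes across retries
        }
        return null;
    }

    // Fixed: lambda executed inside the loop, once per attempt.
    static Integer fixedRetry(Supplier<Integer> op, int attempts) {
        for (int i = 0; i < attempts; i++) {
            Integer result = op.get();          // fresh invocation each retry
            if (result != null) return result;
        }
        return null;
    }

    public static void main(String[] args) {
        calls = 0;
        buggyRetry(RetrySketch::authenticateOnce, 3);
        System.out.println("buggy invocations: " + calls);  // 1
        calls = 0;
        fixedRetry(RetrySketch::authenticateOnce, 3);
        System.out.println("fixed invocations: " + calls);  // 3
    }
}
```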


was (Author: steve_rowe):
{noformat}
   [junit4]   2> 6377 ERROR 
(TEST-TestPKIAuthenticationPlugin.test-seed#[5C3D870A565F6F53]) [] 
o.a.s.s.PKIAuthenticationPlugin Invalid key request timestamp
[...]
   [junit4] FAILURE 6.05s | TestPKIAuthenticationPlugin.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: No principal obtained
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5C3D870A565F6F53:D469B8D0F8A302AB]:0)
   [junit4]>at 
org.apache.solr.security.TestPKIAuthenticationPlugin.run(TestPKIAuthenticationPlugin.java:169)
   [junit4]>at 
org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:101)
{noformat}

Note that doAuthenticate() was only invoked once, and so the TTL was exceeded 
only once, even though the retry loop executed 3 times.  One way to fix this: 
move the lambda execution inside the retry loop.

> TestPKIAuthenticationPlugin NPE
> ---
>
> Key: SOLR-9401
> URL: https://issues.apache.org/jira/browse/SOLR-9401
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Attachments: SOLR-9401.patch, SOLR-9401.patch
>
>
> Failure from my Jenkins, doesn't reproduce for me (this is 
> {{tests-failures.txt}}):
> {noformat}
>   2> Creating dataDir: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugi
> n_7AC33B2240CB767D-001/init-core-data-001
>   2> 14521 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (fal
> se) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, 
> clientAuth=NaN)
>   2> 14540 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Starting test
>   2> 15553 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin No SolrAuth header present
>   2> 15843 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin Invalid key request timestamp: 9 ,
>  received timestamp: 1470760833176 , TTL: 5000
>   2> 15843 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending test
>   2> NOTE: download the large Jenkins line-docs file by running 'ant 
> get-jenkins-line-docs' in the lucene directory.
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestPKIAuthenticationPlugin 
> -Dtests.method=test -Dtests.seed=7AC33B2240CB767D -Dtests.slow=true -Dtests.li
> nedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=cs -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=U
> TF-8
> [12:40:32.094] ERROR   1.35s J7  | TestPKIAuthenticationPlugin.test <<<
>> Throwable #1: java.lang.NullPointerException
>>at 
> __randomizedtesting.SeedInfo.seed([7AC33B2240CB767D:F29704F8EE371B85]:0)
>>at 
> org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:144)
> [...]
>   2> 15867 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 ###deleteCore
>   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugin_7AC33B2240CB767D-001
>   2> NOTE: test params are: codec=Asserting(Lucene62): {}, docValues:{}, 
> 

[jira] [Commented] (SOLR-9401) TestPKIAuthenticationPlugin NPE

2017-02-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888622#comment-15888622
 ] 

Steve Rowe commented on SOLR-9401:
--

{noformat}
   [junit4]   2> 6377 ERROR 
(TEST-TestPKIAuthenticationPlugin.test-seed#[5C3D870A565F6F53]) [] 
o.a.s.s.PKIAuthenticationPlugin Invalid key request timestamp
[...]
   [junit4] FAILURE 6.05s | TestPKIAuthenticationPlugin.test <<<
   [junit4]> Throwable #1: java.lang.AssertionError: No principal obtained
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([5C3D870A565F6F53:D469B8D0F8A302AB]:0)
   [junit4]>at 
org.apache.solr.security.TestPKIAuthenticationPlugin.run(TestPKIAuthenticationPlugin.java:169)
   [junit4]>at 
org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:101)
{noformat}

Note that doAuthenticate() was only invoked once, and so the TTL was exceeded 
only once, even though the retry loop executed 3 times.  One way to fix this: 
move the lambda execution inside the retry loop.

> TestPKIAuthenticationPlugin NPE
> ---
>
> Key: SOLR-9401
> URL: https://issues.apache.org/jira/browse/SOLR-9401
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Attachments: SOLR-9401.patch, SOLR-9401.patch
>
>
> Failure from my Jenkins, doesn't reproduce for me (this is 
> {{tests-failures.txt}}):
> {noformat}
>   2> Creating dataDir: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugi
> n_7AC33B2240CB767D-001/init-core-data-001
>   2> 14521 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (fal
> se) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, 
> clientAuth=NaN)
>   2> 14540 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Starting test
>   2> 15553 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin No SolrAuth header present
>   2> 15843 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin Invalid key request timestamp: 9 ,
>  received timestamp: 1470760833176 , TTL: 5000
>   2> 15843 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending test
>   2> NOTE: download the large Jenkins line-docs file by running 'ant 
> get-jenkins-line-docs' in the lucene directory.
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestPKIAuthenticationPlugin 
> -Dtests.method=test -Dtests.seed=7AC33B2240CB767D -Dtests.slow=true -Dtests.li
> nedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=cs -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=U
> TF-8
> [12:40:32.094] ERROR   1.35s J7  | TestPKIAuthenticationPlugin.test <<<
>> Throwable #1: java.lang.NullPointerException
>>at 
> __randomizedtesting.SeedInfo.seed([7AC33B2240CB767D:F29704F8EE371B85]:0)
>>at 
> org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:144)
> [...]
>   2> 15867 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 ###deleteCore
>   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugin_7AC33B2240CB767D-001
>   2> NOTE: test params are: codec=Asserting(Lucene62): {}, docValues:{}, 
> maxPointsInLeafNode=752, maxMBSortInHeap=5.390190554185364, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=cs, timezone=Europe/Tirane
>   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 1.8.0_77 
> (64-bit)/cpus=16,threads=1,free=255922760,total=336592896
>   2> NOTE: All tests run in this JVM: [TestIndexingPerformance, 
> TestPKIAuthenticationPlugin]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_121) - Build # 757 - Unstable!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/757/
Java: 32bit/jdk1.8.0_121 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas null Last available 
state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node1":{   "core":"MissingSegmentRecoveryTest_shard1_replica1",   
"base_url":"http://127.0.0.1:49692/solr",   
"node_name":"127.0.0.1:49692_solr",   "state":"active",   
"leader":"true"}, "core_node2":{   
"core":"MissingSegmentRecoveryTest_shard1_replica2",   
"base_url":"http://127.0.0.1:49689/solr",   
"node_name":"127.0.0.1:49689_solr",   "state":"down"}}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"MissingSegmentRecoveryTest_shard1_replica1",
          "base_url":"http://127.0.0.1:49692/solr",
          "node_name":"127.0.0.1:49692_solr",
          "state":"active",
          "leader":"true"},
        "core_node2":{
          "core":"MissingSegmentRecoveryTest_shard1_replica2",
          "base_url":"http://127.0.0.1:49689/solr",
          "node_name":"127.0.0.1:49689_solr",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([7609405BFB76EC91:265CD858A2575A8C]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
at 
org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888613#comment-15888613
 ] 

Cassandra Targett commented on SOLR-10215:
--

bq. The contents of /etc/hadoop/conf/ has the correct core-site.xml and 
hadoop-site.xml?

Yeah, I'm pretty sure it is. Solr 6.3 works with the exact same Hadoop 
cluster, and I copied/pasted the params from the 6.4.1 solr.in.sh into the 
6.3.0 solr.in.sh.

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
> x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
> [testing_shard1_replica1]: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> mycluster
> {code}
> I started Solr with the params configured as system params instead of in 
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
> -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
> -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
> -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 
> 2.5).
> I tried with a couple variations of defining the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these variations worked with Solr 6.4.1 (the first 2 got the same 
> error as above, the last was just wrong so it got a different error).
> I believe this problem is isolated to Solr 6.4.x. I tested the same setup (as 
> in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
> address also works fine, but that negates the High Availability feature 
> (which is like failover, for those who don't know).
> _edit: the problem isn't just 6.4.1, I believe it's probably in 6.4.0 also_
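For anyone debugging a similar setup: for {{hdfs://mycluster}} to resolve, the client side must see the standard HDFS HA keys ({{dfs.nameservices}}, {{dfs.ha.namenodes.<ns>}}, {{dfs.namenode.rpc-address.<ns>.<nn>}}). The following is a Hadoop-free sketch of that sanity check; the validation logic is illustrative only, not what Solr actually does:

```java
import java.util.Map;

// Illustrative check (no Hadoop dependency) of the configuration an HDFS HA
// client needs before a nameservice URI like "hdfs://mycluster" can resolve:
// the nameservice must be declared, and each of its namenodes needs an RPC
// address. The key names are the standard HDFS HA configuration keys.
public class HdfsHaConfigCheck {
    static boolean nameserviceResolvable(Map<String, String> conf, String nameservice) {
        String services = conf.getOrDefault("dfs.nameservices", "");
        if (!services.contains(nameservice)) return false;
        String nns = conf.get("dfs.ha.namenodes." + nameservice);
        if (nns == null) return false;
        for (String nn : nns.split(",")) {
            // every logical namenode must have an rpc-address entry
            if (conf.get("dfs.namenode.rpc-address." + nameservice + "." + nn.trim()) == null) {
                return false;
            }
        }
        return true;
    }
}
```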



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9401) TestPKIAuthenticationPlugin NPE

2017-02-28 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888578#comment-15888578
 ] 

Noble Paul commented on SOLR-9401:
--

Can you share the new failure stack trace?

> TestPKIAuthenticationPlugin NPE
> ---
>
> Key: SOLR-9401
> URL: https://issues.apache.org/jira/browse/SOLR-9401
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Noble Paul
> Attachments: SOLR-9401.patch, SOLR-9401.patch
>
>
> Failure from my Jenkins, doesn't reproduce for me (this is 
> {{tests-failures.txt}}):
> {noformat}
>   2> Creating dataDir: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugi
> n_7AC33B2240CB767D-001/init-core-data-001
>   2> 14521 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (fal
> se) via: @org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, 
> clientAuth=NaN)
>   2> 14540 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Starting test
>   2> 15553 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin No SolrAuth header present
>   2> 15843 ERROR 
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.s.PKIAuthenticationPlugin Invalid key request timestamp: 9 ,
>  received timestamp: 1470760833176 , TTL: 5000
>   2> 15843 INFO  
> (TEST-TestPKIAuthenticationPlugin.test-seed#[7AC33B2240CB767D]) [] 
> o.a.s.SolrTestCaseJ4 ###Ending test
>   2> NOTE: download the large Jenkins line-docs file by running 'ant 
> get-jenkins-line-docs' in the lucene directory.
>   2> NOTE: reproduce with: ant test  -Dtestcase=TestPKIAuthenticationPlugin 
> -Dtests.method=test -Dtests.seed=7AC33B2240CB767D -Dtests.slow=true -Dtests.li
> nedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=cs -Dtests.timezone=Europe/Tirane -Dtests.asserts=true 
> -Dtests.file.encoding=U
> TF-8
> [12:40:32.094] ERROR   1.35s J7  | TestPKIAuthenticationPlugin.test <<<
>> Throwable #1: java.lang.NullPointerException
>>at 
> __randomizedtesting.SeedInfo.seed([7AC33B2240CB767D:F29704F8EE371B85]:0)
>>at 
> org.apache.solr.security.TestPKIAuthenticationPlugin.test(TestPKIAuthenticationPlugin.java:144)
> [...]
>   2> 15867 INFO  
> (SUITE-TestPKIAuthenticationPlugin-seed#[7AC33B2240CB767D]-worker) [] 
> o.a.s.SolrTestCaseJ4 ###deleteCore
>   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-tests-master/workspace/solr/build/solr-core/test/J7/temp/solr.security.TestPKIAuthenticationPlugin_7AC33B2240CB767D-001
>   2> NOTE: test params are: codec=Asserting(Lucene62): {}, docValues:{}, 
> maxPointsInLeafNode=752, maxMBSortInHeap=5.390190554185364, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=cs, timezone=Europe/Tirane
>   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 1.8.0_77 
> (64-bit)/cpus=16,threads=1,free=255922760,total=336592896
>   2> NOTE: All tests run in this JVM: [TestIndexingPerformance, 
> TestPKIAuthenticationPlugin]
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9913) LIR should continue on SocketTimeoutException

2017-02-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888566#comment-15888566
 ] 

Mark Miller commented on SOLR-9913:
---

Also, it would be great if we could get our unit tests to catch this case with 
our jetty proxy stuff.

> LIR should continue on SocketTimeoutException
> -
>
> Key: SOLR-9913
> URL: https://issues.apache.org/jira/browse/SOLR-9913
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9913.patch
>
>
> When I run Jepsen tests on the latest source, some nodes cannot recover in 
> time because LIR did not continue trying on SocketTimeoutException.
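The behaviour the issue asks for can be sketched as a retry loop that treats a socket timeout as retriable rather than fatal. The names below are hypothetical, not the actual LeaderInitiatedRecoveryThread code:

```java
import java.net.SocketTimeoutException;
import java.util.concurrent.Callable;

// Hedged sketch of "continue on SocketTimeoutException": a slow replica may
// still recover, so a timeout should trigger another attempt instead of
// aborting the loop. maxAttempts is assumed to be >= 1.
public class LirRetrySketch {
    static <T> T retryOnTimeout(Callable<T> op, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (SocketTimeoutException e) {
                last = e; // keep trying: the node may just be slow, not gone
            }
        }
        throw last; // exhausted all attempts; surface the last timeout
    }
}
```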



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6819) Deprecate index-time boosts?

2017-02-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888561#comment-15888561
 ] 

Uwe Schindler commented on LUCENE-6819:
---

+1 to remove index-time boosts. I always recommend that users add doc values 
fields and use a function query (it's just wrapping the query, very easy 
anyway!). About Solr users: I don't even know whether it is possible at all 
to add index-time boosts with Solr.

> Deprecate index-time boosts?
> 
>
> Key: LUCENE-6819
> URL: https://issues.apache.org/jira/browse/LUCENE-6819
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6819-wip.patch
>
>
> Follow-up of this comment: 
> https://issues.apache.org/jira/browse/LUCENE-6818?focusedCommentId=14934801&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14934801
> Index-time boosts are a very expert feature whose behaviour is tied to the 
> Similarity impl. Additionally, users have often been confused by the poor 
> precision due to the fact that we encode values on a single byte. But now we 
> have doc values, which allow you to encode any values the way you want with as 
> much precision as you need, so maybe we should deprecate index-time boosts and 
> recommend encoding index-time scoring factors into doc values fields instead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6819) Deprecate index-time boosts?

2017-02-28 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6819:
-
Attachment: LUCENE-6819-wip.patch

Here's a patch in case someone would like to run some relevancy tests. It goes 
even further and uses a completely different encoding that stores lengths in a 
byte. It is fully accurate up to 40, and then accuracy degrades linearly with 
the log of the length. It has the restriction that it does not support index 
boosts, but on the other hand, assuming that index boosts are not used allows 
it to make all 256 values useful, while with the current encoding, if index 
boosts are not used, only 63 values represent valid lengths: the other values 
are either less than 1 or greater than MAX_VALUE.

The patch is just a proof of concept and does not try to tackle the removal of 
index-time boosts or backward compatibility, which are the hard problems here.
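As a rough illustration of that kind of encoding (the actual scheme in LUCENE-6819-wip.patch may well differ; the threshold of 40 comes from the comment above, while the bucket growth factor here is purely an assumption):

```java
// Speculative sketch of a length-in-one-byte encoding: small lengths get a
// code each and round-trip exactly, while longer lengths share
// logarithmically spaced buckets, so relative precision degrades with
// log(length). Not the patch's actual algorithm.
public class LengthByteSketch {
    static final int EXACT = 40;      // lengths 1..40 are lossless
    static final double BASE = 1.03;  // bucket growth factor (assumed)

    static int encode(int length) {
        if (length <= EXACT) return length;
        int code = EXACT + (int) Math.floor(
                Math.log((double) length / EXACT) / Math.log(BASE));
        return Math.min(code, 255);   // must fit in one unsigned byte
    }

    static int decode(int code) {
        if (code <= EXACT) return code;
        // return the representative length of the bucket
        return (int) Math.round(EXACT * Math.pow(BASE, code - EXACT));
    }
}
```

With these parameters, lengths up to 40 decode exactly, while e.g. 1000 decodes to a value within a few percent of the original.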

> Deprecate index-time boosts?
> 
>
> Key: LUCENE-6819
> URL: https://issues.apache.org/jira/browse/LUCENE-6819
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-6819-wip.patch
>
>
> Follow-up of this comment: 
> https://issues.apache.org/jira/browse/LUCENE-6818?focusedCommentId=14934801&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14934801
> Index-time boosts are a very expert feature whose behaviour is tied to the 
> Similarity impl. Additionally, users have often been confused by the poor 
> precision due to the fact that we encode values on a single byte. But now we 
> have doc values, which allow you to encode any values the way you want with as 
> much precision as you need, so maybe we should deprecate index-time boosts and 
> recommend encoding index-time scoring factors into doc values fields instead.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10214) clean up BlockCache metrics

2017-02-28 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888533#comment-15888533
 ] 

Yonik Seeley commented on SOLR-10214:
-

bq. Perhaps this is a good opportunity to use the new metrics API here?

Oops, sorry Andrzej, I previously missed this comment.
I don't know anything about the new metrics API yet, and I was just doing some 
simple cleanup here in pursuit of SOLR-10205 (I wanted to start tracking 
storeFails).
I'll keep this issue open for now in case someone wants to tackle converting to 
the new API... we can just tack that onto this issue if it's before 6.5

> clean up BlockCache metrics
> ---
>
> Key: SOLR-10214
> URL: https://issues.apache.org/jira/browse/SOLR-10214
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Yonik Seeley
> Attachments: SOLR-10214.patch, SOLR-10214.patch
>
>
> Many (most) of the block cache metrics are unused (I assume just inherited 
> from Blur) and unmaintained (i.e. most will be 0).  Currently only the size 
> and number of evictions is tracked.
> We should remove unused stats and start tracking
> - number of lookups (or number of misses)
> - number of hits
> - number of inserts
> - number of store failures
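The four counters proposed above are cheap to keep. A minimal sketch (assumed names, not Solr's actual metrics classes), with misses derivable as lookups minus hits:

```java
import java.util.concurrent.atomic.LongAdder;

// Hypothetical counter holder for the stats the issue proposes: lookups,
// hits, inserts, and store failures. LongAdder keeps increments cheap under
// concurrent access, which matters on a hot cache path.
public class BlockCacheMetricsSketch {
    final LongAdder lookups = new LongAdder();
    final LongAdder hits = new LongAdder();
    final LongAdder inserts = new LongAdder();
    final LongAdder storeFails = new LongAdder();

    void recordLookup(boolean hit) {
        lookups.increment();
        if (hit) hits.increment();
    }

    void recordStore(boolean ok) {
        if (ok) inserts.increment(); else storeFails.increment();
    }

    // Misses need not be tracked separately.
    long misses() { return lookups.sum() - hits.sum(); }
}
```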



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10214) clean up BlockCache metrics

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888522#comment-15888522
 ] 

ASF subversion and git services commented on SOLR-10214:


Commit 5af1b8ad455a86dfe26cbda4889da5c1aa11ce31 in lucene-solr's branch 
refs/heads/branch_6x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5af1b8a ]

SOLR-10214: clean up BlockCache Metrics, add storeFails and counts


> clean up BlockCache metrics
> ---
>
> Key: SOLR-10214
> URL: https://issues.apache.org/jira/browse/SOLR-10214
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Yonik Seeley
> Attachments: SOLR-10214.patch, SOLR-10214.patch
>
>
> Many (most) of the block cache metrics are unused (I assume just inherited 
> from Blur) and unmaintained (i.e. most will be 0).  Currently only the size 
> and number of evictions is tracked.
> We should remove unused stats and start tracking
> - number of lookups (or number of misses)
> - number of hits
> - number of inserts
> - number of store failures



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Need to modify boolean AND search

2017-02-28 Thread Nilesh Kamani
Hello All,

I want to modify a boolean AND search.
Just to give an example: if somebody searches for +A +B +C, but there is no
document that contains all three phrases, it should return at least the
+A +B or +A +C matches.
Could you please tell me which classes I will need to modify for this?


Thanks,
Nilesh Kamani
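For what it's worth, Lucene can usually express this relaxation without modifying classes: make the clauses SHOULD instead of MUST and use BooleanQuery.Builder#setMinimumNumberShouldMatch, lowering the minimum until something matches. A dependency-free sketch of those semantics, with plain Java sets of terms standing in for indexed documents (this illustrates the fallback logic only, not Lucene's API):

```java
import java.util.List;
import java.util.Set;

// Illustrates "require all terms; if nothing matches, relax one term at a
// time" - the behaviour minimumNumberShouldMatch gives a BooleanQuery.
public class RelaxedAndSketch {
    static long countMatching(List<Set<String>> docs, Set<String> terms, int minMatch) {
        return docs.stream()
                   .filter(d -> terms.stream().filter(d::contains).count() >= minMatch)
                   .count();
    }

    // Try the strict AND first, then progressively lower the match threshold.
    static long search(List<Set<String>> docs, Set<String> terms) {
        for (int min = terms.size(); min >= 1; min--) {
            long n = countMatching(docs, terms, min);
            if (n > 0) return n;
        }
        return 0;
    }
}
```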


[jira] [Commented] (SOLR-10214) clean up BlockCache metrics

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15888521#comment-15888521
 ] 

ASF subversion and git services commented on SOLR-10214:


Commit 34bb7f31e546856094cb378b9d12c9ac7540e7e2 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=34bb7f3 ]

SOLR-10214: clean up BlockCache Metrics, add storeFails and counts


> clean up BlockCache metrics
> ---
>
> Key: SOLR-10214
> URL: https://issues.apache.org/jira/browse/SOLR-10214
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Yonik Seeley
> Attachments: SOLR-10214.patch, SOLR-10214.patch
>
>
> Many (most) of the block cache metrics are unused (I assume just inherited 
> from Blur) and unmaintained (i.e. most will be 0).  Currently only the size 
> and number of evictions are tracked.
> We should remove the unused stats and start tracking:
> - number of lookups (or number of misses)
> - number of hits
> - number of inserts
> - number of store failures




[jira] [Commented] (SOLR-9045) make RecoveryStrategy settings configurable

2017-02-28 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888516#comment-15888516
 ] 

Christine Poerschke commented on SOLR-9045:
---

Working branch revived and 'custom implementations not officially supported' 
style comments added; the motivation here remains unchanged, i.e. to support 
customisation/configuration of alternative network interfaces so that 
copy-all-data replication traffic can be separated from regular live 
within-cloud traffic.

https://github.com/apache/lucene-solr/compare/jira/solr-9045 is the updated 
working branch; additional comments, reviews, etc. are welcome as usual. Hoping 
to commit the changes sometime next week.

(I have refrained from renaming RecoveryStrategy to something else since it 
wasn't obvious what the new name should be.)

> make RecoveryStrategy settings configurable
> ---
>
> Key: SOLR-9045
> URL: https://issues.apache.org/jira/browse/SOLR-9045
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
>
> objectives:
>  * to allow users to change RecoveryStrategy settings such as maxRetries and 
> startingRecoveryDelay
>  * to support configuration of a custom recovery strategy e.g. SOLR-9044
> patch summary:
>  * support for an optional {{<recoveryStrategy>}} solrconfig.xml element added (if 
> the element is present then its class attribute is optional)
>  * RecoveryStrategy settings now have getters/setters
>  * RecoveryStrategy.Builder added (and RecoveryStrategy constructor made 
> non-public in favour of RecoveryStrategy.Builder.create)
>  * protected RecoveryStrategy.getReplicateLeaderUrl method factored out 
> (ConfigureRecoveryStrategyTest$CustomRecoveryStrategyBuilder test illustrates 
> how SOLR-9044 might override the method)
>  * ConfigureRecoveryStrategyTest.java using 
> solrconfig-configurerecoverystrategy.xml or 
> solrconfig-customrecoverystrategy.xml
> illustrative solrconfig.xml snippets:
>  * change a RecoveryStrategy setting
> {code}
> <recoveryStrategy>
>   <int name="maxRetries">250</int>
> </recoveryStrategy>
> {code}
> * configure a custom class
> {code}
> <recoveryStrategy
>   class="org.apache.solr.core.ConfigureRecoveryStrategyTest$CustomRecoveryStrategyBuilder">
>   recovery_base_url
> </recoveryStrategy>
> {code}
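The Builder indirection summarized in the patch notes above (non-public constructor, overridable Builder.create) is a common pluggability pattern; a stripped-down plain-Java sketch with all names illustrative:

```java
/**
 * Stripped-down sketch of the builder pattern described in the patch summary:
 * the strategy constructor is non-public and callers go through a Builder whose
 * create() can be overridden by a custom subclass. All names are illustrative.
 */
public class StrategySketch {
    private int maxRetries = 500;              // hypothetical default

    protected StrategySketch() {}              // non-public: use the Builder

    public int getMaxRetries() { return maxRetries; }
    public void setMaxRetries(int maxRetries) { this.maxRetries = maxRetries; }

    public static class Builder {
        /** Applies configured settings via setters after create(). */
        public final StrategySketch build(int maxRetries) {
            StrategySketch s = create();
            s.setMaxRetries(maxRetries);
            return s;
        }
        /** Override in a custom Builder to return a custom strategy subclass. */
        protected StrategySketch create() { return new StrategySketch(); }
    }
}
```

A custom implementation overrides only create(); the configured settings are still applied uniformly by build().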






[jira] [Commented] (SOLR-9555) Leader incorrectly publishes state for replica when it puts replica into LIR.

2017-02-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888501#comment-15888501
 ] 

Mark Miller commented on SOLR-9555:
---

This is a great start Mike, I'll take a look. One thing I have been thinking 
about is perhaps queuing up LIR publishes to ZK for a second or two rather than 
hitting it for every update. You can index thousands of documents per second 
and they can fail by the thousands, so it would be nice to have a little 
throttle on ZK communication.

bq.  Might need a flag to mark the node as dead locally or something like that.

I don't know that it's critical, because this would always be a problem to a 
lesser degree anyway, but this is probably a good idea. I don't know how tricky 
it ends up being, but it seems like we could locally mark the state as down until 
we notice its state change in the clusterstate.
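The throttling idea above can be sketched as a window-based coalescer in plain Java (illustrative only, not Solr's actual LIR code); the real ZooKeeper write would replace the counter increment:

```java
/**
 * Window-based coalescing of publish requests: at most one real publish per
 * interval; requests inside the window are remembered as pending.
 * Illustrative sketch only -- not Solr's actual LIR code.
 */
public class ThrottledPublisher {
    private final long intervalNanos;
    private boolean everPublished = false;
    private long lastPublishNanos;
    private int publishCount = 0;
    private boolean pending = false;  // a request arrived during the throttle window

    public ThrottledPublisher(long intervalMillis) {
        this.intervalNanos = intervalMillis * 1_000_000L;
    }

    /** Called for every failed update; returns true iff a real publish happened now. */
    public synchronized boolean requestPublish(long nowNanos) {
        if (!everPublished || nowNanos - lastPublishNanos >= intervalNanos) {
            everPublished = true;
            lastPublishNanos = nowNanos;
            publishCount++;           // stand-in for the actual ZooKeeper write
            pending = false;
            return true;
        }
        pending = true;               // coalesced: state changed inside the window
        return false;
    }

    public synchronized int publishes() { return publishCount; }
    public synchronized boolean hasPendingPublish() { return pending; }
}
```

A production version would also flush any pending publish when the window expires; the point here is only that thousands of failing updates per second collapse into one ZK write per interval.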

> Leader incorrectly publishes state for replica when it puts replica into LIR.
> -
>
> Key: SOLR-9555
> URL: https://issues.apache.org/jira/browse/SOLR-9555
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Alan Woodward
> Attachments: SOLR-9555-WIP.patch
>
>
> See 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17888/consoleFull 
> for an example






[jira] [Created] (SOLR-10217) Add a query for the background set to the significantTerms streaming expression

2017-02-28 Thread Gethin James (JIRA)
Gethin James created SOLR-10217:
---

 Summary: Add a query for the background set to the 
significantTerms streaming expression
 Key: SOLR-10217
 URL: https://issues.apache.org/jira/browse/SOLR-10217
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Gethin James


Following the work on SOLR-10156 we now have a significantTerms expression.

Currently, the background set is always the full index.  It would be great if 
we could use a query to define the background set.
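Purely as an illustration of the request (the {{backgroundQuery}} parameter below is hypothetical, not an implemented API; the other parameters follow the expression added in SOLR-10156), such an expression might look like:

```
significantTerms(collection1,
                 q="body:solr",
                 backgroundQuery="year:2016",
                 field="author_s",
                 limit="10")
```

Here the foreground set would be the documents matching {{q}} and the background set would shrink from the full index to the documents matching {{backgroundQuery}}.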






[jira] [Commented] (SOLR-8776) Support RankQuery in grouping

2017-02-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888478#comment-15888478
 ] 

ASF GitHub Bot commented on SOLR-8776:
--

GitHub user diegoceccarelli opened a pull request:

https://github.com/apache/lucene-solr/pull/162

SOLR-8776: Support RankQuery in grouping

Update SOLR-8776 to current master
  - Reranking and grouping work together in the non-distributed setting when 
grouping is done by field
  - Still to fix: the distributed setting, and grouping based on the 
unique values of a function query. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr master-solr-8776

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/162.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #162


commit cd33172184c3889dfe95c631e7c30729f1c752a3
Author: diego 
Date:   2017-02-28T10:28:32Z

SOLR-8776: Support RankQuery in grouping




> Support RankQuery in grouping
> -
>
> Key: SOLR-8776
> URL: https://issues.apache.org/jira/browse/SOLR-8776
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 6.0
>Reporter: Diego Ceccarelli
>Priority: Minor
> Fix For: 6.0
>
> Attachments: 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch
>
>
> Currently it is not possible to use RankQuery [1] and Grouping [2] together 
> (see also [3]). In some situations Grouping can be replaced by Collapse and 
> Expand Results [4] (that supports reranking), but i) collapse cannot 
> guarantee that at least a minimum number of groups will be returned for a 
> query, and ii) in the Solr Cloud setting you will have constraints on how to 
> partition the documents among the shards.
> I'm going to start working on supporting RankQuery in grouping. I'll start 
> attaching a patch with a test that fails because grouping does not support 
> the rank query and then I'll try to fix the problem, starting from the non 
> distributed setting (GroupingSearch).
> My feeling is that since grouping is mostly performed by Lucene, RankQuery 
> should be refactored and moved (or partially moved) there. 
> Any feedback is welcome.
> [1] https://cwiki.apache.org/confluence/display/solr/RankQuery+API 
> [2] https://cwiki.apache.org/confluence/display/solr/Result+Grouping
> [3] 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201507.mbox/%3ccahm-lpuvspest-sw63_8a6gt-wor6ds_t_nb2rope93e4+s...@mail.gmail.com%3E
> [4] 
> https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results
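Independently of Solr's actual RankQuery API, the per-group reranking being described (rescore only the head of each group, leave the tail in its original order) can be sketched in plain Java, with all names illustrative:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.ToDoubleFunction;

/**
 * Generic sketch of per-group reranking: rescore and re-sort only the top-k
 * entries of each group, leaving the tail in its original order.
 * Illustrative only -- not Solr's RankQuery API.
 */
public class GroupRerankSketch {
    public static <T> List<List<T>> rerank(List<List<T>> groups, int k,
                                           ToDoubleFunction<T> rescore) {
        List<List<T>> out = new ArrayList<>();
        for (List<T> group : groups) {
            int head = Math.min(k, group.size());
            // Re-sort only the first k entries by the new score, descending.
            List<T> reranked = new ArrayList<>(group.subList(0, head));
            reranked.sort(Comparator.comparingDouble(rescore).reversed());
            // Tail keeps its original (first-pass) order.
            reranked.addAll(group.subList(head, group.size()));
            out.add(reranked);
        }
        return out;
    }
}
```

The distributed case is harder because each shard sees only part of every group, which is why the PR above flags it as still open.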






[GitHub] lucene-solr pull request #162: SOLR-8776: Support RankQuery in grouping

2017-02-28 Thread diegoceccarelli
GitHub user diegoceccarelli opened a pull request:

https://github.com/apache/lucene-solr/pull/162

SOLR-8776: Support RankQuery in grouping

Update SOLR-8776 to current master
  - Reranking and grouping work together in the non-distributed setting when 
grouping is done by field
  - Still to fix: the distributed setting, and grouping based on the 
unique values of a function query. 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr master-solr-8776

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/162.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #162


commit cd33172184c3889dfe95c631e7c30729f1c752a3
Author: diego 
Date:   2017-02-28T10:28:32Z

SOLR-8776: Support RankQuery in grouping




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 297 - Still Unstable

2017-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/297/

2 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
document count mismatch.  control=305 sum(shards)=304 cloudClient=304

Stack Trace:
java.lang.AssertionError: document count mismatch.  control=305 sum(shards)=304 
cloudClient=304
at 
__randomizedtesting.SeedInfo.seed([C54B093F53762F4F:4D1F36E5FD8A42B7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.checkShardConsistency(AbstractFullDistribZkTestBase.java:1332)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:226)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888464#comment-15888464
 ] 

David Smiley commented on LUCENE-7717:
--

At some point _after_ MultiTermHighlighting.java was first written, PrefixQuery 
was altered to be a subclass of AutomatonQuery.  So the PrefixQuery detection 
could simply be removed now, I think, since it's handled by the AutomatonQuery 
condition.

I'm working on debugging to see _why_ this fails, and on a proper test.  (The 
test would go in TestUnifiedHighlighterMTQ, by the way.)

> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[jira] [Assigned] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned LUCENE-7717:


Assignee: David Smiley

> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
>Assignee: David Smiley
> Attachments: LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[jira] [Commented] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888458#comment-15888458
 ] 

Kevin Risden commented on SOLR-10215:
-

Usually I'll use the following command to start Solr with HDFS so it pulls the 
right configs:

{code}
-Dsolr.hdfs.home=$(hdfs getconf -confKey fs.defaultFS)/apps/solr 
-Dsolr.hdfs.confdir=/etc/hadoop/conf
{code}

{{hdfs getconf -confKey fs.defaultFS}} guarantees that the right HDFS home is 
used; it requires that /etc/hadoop/conf contains the right config files.
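Wired into {{solr.in.sh}}, that might look like the following (the /apps/solr path is illustrative):

```
# solr.in.sh -- derive solr.hdfs.home from the active HDFS config (illustrative)
SOLR_OPTS="$SOLR_OPTS \
  -Dsolr.hdfs.home=$(hdfs getconf -confKey fs.defaultFS)/apps/solr \
  -Dsolr.hdfs.confdir=/etc/hadoop/conf"
```

Because fs.defaultFS is read from the cluster's own config, this resolves to the HA nameservice URI rather than a hard-coded namenode address.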

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
> x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
> [testing_shard1_replica1]: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> mycluster
> {code}
> I started Solr with the params configured as system params instead of in 
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
> -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
> -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
> -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 
> 2.5).
> I tried with a couple variations of defining the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these variations worked with Solr 6.4.1 (the first 2 got the same 
> error as above, the last was just wrong so it got a different error).
> I believe this problem is isolated to Solr 6.4.x. I tested the same setup (as 
> in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
> address also works fine, but that negates the High Availability feature 
> (which is like failover, for those who don't know).
> _edit: the problem isn't just 6.4.1, I believe it's probably in 6.4.0 also_






[jira] [Updated] (LUCENE-7717) UnifiedHighlighter don't work with russian PrefixQuery

2017-02-28 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-7717:

Attachment: LUCENE-7717.patch

Hello Dmitry. - I am attaching a potential test case adapted from your code 
snippet (no pun intended) in the description. The test passes locally for me, 
though. Could you perhaps try running it locally too, adapt/adjust it, and try 
it with/without the MultiTermHighlighting change you mention? Thanks. - Christine

> UnifiedHighlighter don't work with russian PrefixQuery
> --
>
> Key: LUCENE-7717
> URL: https://issues.apache.org/jira/browse/LUCENE-7717
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/highlighter
>Affects Versions: 6.3, 6.4.1
>Reporter: Dmitry Malinin
> Attachments: LUCENE-7717.patch
>
>
> UnifiedHighlighter highlighter = new UnifiedHighlighter(null, new 
> StandardAnalyzer());
> Query query = new PrefixQuery(new Term("title", "я"));
> String testData = "я";
> Object snippet = highlighter.highlightWithoutSearcher(fieldName, query, 
> testData, 1);
> System.out.printf("testData=[%s] Query=%s snippet=[%s]\n", testData, query, 
> snippet==null?null:snippet.toString());






[jira] [Commented] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888450#comment-15888450
 ] 

Kevin Risden commented on SOLR-10215:
-

Does /etc/hadoop/conf/ contain the correct core-site.xml and 
hadoop-site.xml?

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
> x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
> [testing_shard1_replica1]: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> mycluster
> {code}
> I started Solr with the params configured as system params instead of in 
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
> -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
> -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
> -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 
> 2.5).
> I tried with a couple variations of defining the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these variations worked with Solr 6.4.1 (the first 2 got the same 
> error as above, the last was just wrong so it got a different error).
> I believe this problem is isolated to Solr 6.4.x. I tested the same setup (as 
> in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
> address also works fine, but that negates the High Availability feature 
> (which is like failover, for those who don't know).
> _edit: the problem isn't just 6.4.1, I believe it's probably in 6.4.0 also_






[jira] [Updated] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-10215:
-
Description: 
As of Solr 6.4, it seems it's no longer possible to use a namenode instead of a 
server address with the {{solr.hdfs.home}} parameter when configuring Solr with 
HDFS high availability (HA).

Startup is fine, but when trying to create a collection, this error is in the 
logs:

{code}
2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
[testing_shard1_replica1]: Error Instantiating Update Handler, 
solr.DirectUpdateHandler2 failed to instantiate 
org.apache.solr.update.UpdateHandler
org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
solr.DirectUpdateHandler2 failed to instantiate 
org.apache.solr.update.UpdateHandler
{code}

And after the full stack trace (which I will put in a comment), there is this:

{code}
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
mycluster
{code}

I started Solr with the params configured as system params instead of in 
{{solrconfig.xml}}, so my {{solr.in.sh}} has this:

{code}
SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
-Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
-Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
-Dsolr.hdfs.confdir=/etc/hadoop/conf/"
{code}

Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 2.5).

I tried with a couple variations of defining the Solr home parameter:

* {{hdfs://mycluster:8020/solr-index}}
* {{hdfs://mycluster/solr-index}}
* {{solr-index}}

None of these variations worked with Solr 6.4.1 (the first 2 got the same error 
as above, the last was just wrong so it got a different error).

I believe this problem is isolated to Solr 6.4.x. I tested the same setup (as 
in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
address also works fine, but that negates the High Availability feature (which 
is like failover, for those who don't know).

_edit: the problem isn't just 6.4.1, I believe it's probably in 6.4.0 also_

  was:
As of Solr 6.4, it seems it's no longer possible to use a namenode instead of a 
server address with the {{solr.hdfs.home}} parameter when configuring Solr with 
HDFS high availability (HA).

Startup is fine, but when trying to create a collection, this error is in the 
logs:

{code}
2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
[testing_shard1_replica1]: Error Instantiating Update Handler, 
solr.DirectUpdateHandler2 failed to instantiate 
org.apache.solr.update.UpdateHandler
org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
solr.DirectUpdateHandler2 failed to instantiate 
org.apache.solr.update.UpdateHandler
{code}

And after the full stack trace (which I will put in a comment), there is this:

{code}
Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
mycluster
{code}

I started Solr with the params configured as system params instead of in 
{{solrconfig.xml}}, so my {{solr.in.sh}} has this:

{code}
SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
-Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
-Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
-Dsolr.hdfs.confdir=/etc/hadoop/conf/"
{code}

Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 2.5).

I tried with a couple variations of defining the Solr home parameter:

* {{hdfs://mycluster:8020/solr-index}}
* {{hdfs://mycluster/solr-index}}
* {{solr-index}}

None of these variations worked with Solr 6.4.1 (the first 2 got the same error 
as above, the last was just wrong so it got a different error).

I believe this problem is isolated to Solr 6.4.1. I tested the same setup (as 
in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
address also works fine, but that negates the High Availability feature (which 
is like failover, for those who don't know).


> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> 

[jira] [Commented] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888429#comment-15888429
 ] 

Ishan Chattopadhyaya commented on SOLR-10215:
-

Seems to be a major regression, and should be part of 6.4.2. Marking this as a 
blocker, and holding off on the 6.4.2 RC for now, until someone can take a look.

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>
> As of Solr 6.4, it seems it's no longer possible to use a namenode instead of 
> a server address with the {{solr.hdfs.home}} parameter when configuring Solr 
> with HDFS high availability (HA).
> Startup is fine, but when trying to create a collection, this error is in the 
> logs:
> {code}
> 2017-02-27 22:22:57.359 ERROR (qtp401424608-21) [c:testing s:shard1  
> x:testing_shard1_replica1] o.a.s.c.CoreContainer Error creating core 
> [testing_shard1_replica1]: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> org.apache.solr.common.SolrException: Error Instantiating Update Handler, 
> solr.DirectUpdateHandler2 failed to instantiate 
> org.apache.solr.update.UpdateHandler
> {code}
> And after the full stack trace (which I will put in a comment), there is this:
> {code}
> Caused by: java.lang.IllegalArgumentException: java.net.UnknownHostException: 
> mycluster
> {code}
> I started Solr with the params configured as system params instead of in 
> {{solrconfig.xml}}, so my {{solr.in.sh}} has this:
> {code}
> SOLR_OPTS="$SOLR_OPTS $SOLR_ZK_CREDS_AND_ACLS 
> -Dsolr.directoryFactory=HdfsDirectoryFactory -Dsolr.lock.type=hdfs 
> -Dsolr.hdfs.home=hdfs://mycluster:8020/solr-index 
> -Dsolr.hdfs.confdir=/etc/hadoop/conf/"
> {code}
> Solr in this case is running on the same nodes as Hadoop (Hortonworks HDP 
> 2.5).
> I tried with a couple variations of defining the Solr home parameter:
> * {{hdfs://mycluster:8020/solr-index}}
> * {{hdfs://mycluster/solr-index}}
> * {{solr-index}}
> None of these variations worked with Solr 6.4.1 (the first 2 got the same 
> error as above, the last was just wrong so it got a different error).
> I believe this problem is isolated to Solr 6.4.1. I tested the same setup (as 
> in the {{solr.in.sh}} above) with 6.3.0 and it worked fine. Using the server 
> address also works fine, but that negates the High Availability feature 
> (which is like failover, for those who don't know).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10215:

Fix Version/s: 6.4.2

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>






[jira] [Updated] (SOLR-10215) Cannot use the namenode for HDFS HA as of Solr 6.4

2017-02-28 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-10215:

Priority: Blocker  (was: Major)

> Cannot use the namenode for HDFS HA as of Solr 6.4
> --
>
> Key: SOLR-10215
> URL: https://issues.apache.org/jira/browse/SOLR-10215
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: 6.4.1
>Reporter: Cassandra Targett
>Priority: Blocker
> Fix For: 6.4.2
>
>






[jira] [Commented] (SOLR-9913) LIR should continue on SocketTimeoutException

2017-02-28 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888422#comment-15888422
 ] 

Mark Miller commented on SOLR-9913:
---

Seems reasonable to me.

I'd really like to remove the need for this per-update fail request. Ideally, 
this request would go through ZK rather than being attempted directly; the 
replica would instead just watch the LIR nodes. That is also how I would like to 
get rid of the 'leader publishes down for replica' issue. We would not really 
want per-update writes to ZK though, so we would probably want some delayed 
action that collects requests and only talks to ZK once every few seconds or so.
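A minimal sketch of the kind of delayed, coalescing publisher described above (hypothetical names and API; not Solr's actual LIR code) might look like this: failures are collected in a set, and ZooKeeper is written at most once per interval, however many per-update failures arrive.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Consumer;

/** Sketch only -- not Solr's actual LIR implementation. Coalesces
 *  per-update failure notifications so ZooKeeper is written at most
 *  once per interval instead of once per failed update. */
public class CoalescingLirPublisher {
  private final Set<String> pending = new LinkedHashSet<>();
  private final long intervalMs;
  private long lastFlushMs;
  private final Consumer<Set<String>> zkWriter; // stands in for the real ZK update
  int zkWrites = 0;

  CoalescingLirPublisher(long intervalMs, Consumer<Set<String>> zkWriter) {
    this.intervalMs = intervalMs;
    this.zkWriter = zkWriter;
    this.lastFlushMs = -intervalMs; // so the very first failure flushes immediately
  }

  /** Record a failed replica; only talk to ZK if the interval has elapsed. */
  synchronized void replicaFailed(String coreNodeName, long nowMs) {
    pending.add(coreNodeName);
    if (nowMs - lastFlushMs >= intervalMs) {
      zkWriter.accept(new LinkedHashSet<>(pending)); // one batched ZK write
      pending.clear();
      zkWrites++;
      lastFlushMs = nowMs;
    }
  }

  public static void main(String[] args) {
    List<Set<String>> writes = new ArrayList<>();
    CoalescingLirPublisher p = new CoalescingLirPublisher(5000, writes::add);
    // 1000 failed updates against two replicas inside one 5s window...
    for (int i = 0; i < 1000; i++) p.replicaFailed("core_node" + (i % 2), i);
    // ...then one more after the window: everything pending flushes together.
    p.replicaFailed("core_node3", 6000);
    System.out.println(p.zkWrites + " ZK writes: " + writes);
  }
}
```

A real version would need a scheduled flush for quiet periods, but the batching idea is the same.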

> LIR should continue on SocketTimeoutException
> -
>
> Key: SOLR-9913
> URL: https://issues.apache.org/jira/browse/SOLR-9913
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9913.patch
>
>
> When I ran Jepsen tests on the latest source, some nodes could not recover in 
> time because LIR did not keep retrying on SocketTimeoutException.






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_121) - Build # 2964 - Failure!

2017-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2964/
Java: 64bit/jdk1.8.0_121 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 12595 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/temp/junit4-J2-20170228_160715_0244384902294190742124.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] [CodeBlob (0x7f979106f8d0)]
   [junit4] Framesize: 60
   [junit4] Runtime Stub (0x7f979106f8d0): handle_exception_nofpu Runtime1 
stub
   [junit4] Could not load hsdis-amd64.so; library not loadable; PrintAssembly 
is disabled
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  Internal Error (sharedRuntime.cpp:834), pid=17532, 
tid=0x7f960fefe700
   [junit4] #  fatal error: exception happened outside interpreter, nmethods 
and vtable stubs at pc 0x7f979106fa3f
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (8.0_121-b13) (build 
1.8.0_121-b13)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.121-b13 mixed mode 
linux-amd64 compressed oops)
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try "ulimit -c unlimited" before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/hs_err_pid17532.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.java.com/bugreport/crash.jsp
   [junit4] #
   [junit4] <<< JVM J2: EOF 

[...truncated 615 lines...]
   [junit4] ERROR: JVM J2 ended with an exception, command line: 
/home/jenkins/tools/java/64bit/jdk1.8.0_121/jre/bin/java -XX:+UseCompressedOops 
-XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/heapdumps -ea 
-esa -Dtests.prefix=tests -Dtests.seed=26DA84EC85759A40 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=6.5.0 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=3 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/temp
 -Dcommon.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene 
-Dclover.db.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build/clover/db
 
-Djava.security.policy=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=6.5.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/home/jenkins/workspace/Lucene-Solr-6.x-Linux 
-Djunit4.childvm.cwd=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2
 -Djunit4.childvm.id=2 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=US-ASCII -classpath 

[jira] [Commented] (LUCENE-7713) Optimize TopFieldDocCollector for the sorted case

2017-02-28 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888405#comment-15888405
 ] 

Adrien Grand commented on LUCENE-7713:
--

I played with sorting the geonames dataset on the population field and 
disabling the compareBottom call after {{numHits}} documents have been 
collected, which reduced the query time from 92ms to 17ms (about 5x faster). So 
I think such a change could yield serious speedups for users who still want to 
compute the total number of hits (which means early termination is not an 
option).
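The idea can be illustrated with a toy collector (hypothetical code, not Lucene's actual {{TopFieldCollector}}): when documents arrive already in the requested sort order, the top-N is complete after {{numHits}} docs, so every later document needs only a count, not a doc-values read or a compareBottom call.

```java
/** Toy illustration, not Lucene's TopFieldCollector: when docs arrive in
 *  the requested sort order (ascending values here), the top-N queue is
 *  complete after numHits docs; later docs only need counting. */
public class SortedTopNCounter {
  /** Returns {bottomValueOfTopN, totalHits, comparisonsDone}. */
  static long[] collect(int[] valuesInSortOrder, int numHits) {
    long totalHits = 0, comparisons = 0;
    int collected = 0, bottom = Integer.MIN_VALUE;
    for (int v : valuesInSortOrder) {
      totalHits++;
      if (collected < numHits) {
        comparisons++;  // still filling the queue
        bottom = v;     // ascending order: the last doc collected is the bottom
        collected++;
      }
      // else: no doc-values read, no compareBottom -- just the count above
    }
    return new long[] { bottom, totalHits, comparisons };
  }

  public static void main(String[] args) {
    int[] values = new int[1_000_000];
    for (int i = 0; i < values.length; i++) values[i] = i;
    long[] r = collect(values, 10);
    // Only numHits comparisons for a million hits, yet totalHits is exact.
    System.out.println("bottom=" + r[0] + " totalHits=" + r[1]
        + " comparisons=" + r[2]);
  }
}
```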

> Optimize TopFieldDocCollector for the sorted case
> -
>
> Key: LUCENE-7713
> URL: https://issues.apache.org/jira/browse/LUCENE-7713
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> When the sort order is a prefix of the index sort order, 
> {{TopFieldDocCollector}} could skip reading doc values and comparing them 
> against the bottom value after {{numHits}} documents have been collected, and 
> just count matches.






[jira] [Commented] (SOLR-9876) Reuse CountSlotArrAcc internal array for same level subFacets

2017-02-28 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888374#comment-15888374
 ] 

ASF GitHub Bot commented on SOLR-9876:
--

Github user dennisgove commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/126#discussion_r103494004
  
--- Diff: solr/core/src/java/org/apache/solr/search/facet/SlotAcc.java ---
@@ -394,7 +394,16 @@ public CountSlotAcc(FacetContext fcontext) {
   int[] result;
   public CountSlotArrAcc(FacetContext fcontext, int numSlots) {
 super(fcontext);
-result = new int[numSlots];
+
+String key = fcontext.level + this.getClass().getSimpleName();
+result = (int[]) fcontext.getReusable(key);
+if (result == null || result.length < numSlots) {
+  result = new int[numSlots];
+  fcontext.addReusable(key, result);
--- End diff --

If null != result, is it reasonable to reset the value under the key to the 
newly created array?
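For reference, a self-contained sketch of the reuse-with-resize pattern under discussion (hypothetical names; not the actual FacetContext API). It answers the review question above: when the cached array is too small, re-putting under the same key replaces the smaller array, so later same-level accumulators see the larger buffer; and a reused array must be zeroed before reuse so stale counts do not leak between siblings.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

/** Sketch of per-level array reuse (hypothetical API, not Solr's
 *  FacetContext): reuse a cached int[] when it is large enough,
 *  otherwise allocate a bigger one and cache it in place of the old. */
public class ReusableIntArrays {
  private final Map<String, int[]> reusable = new HashMap<>();

  int[] acquire(String key, int numSlots) {
    int[] a = reusable.get(key);
    if (a == null || a.length < numSlots) {
      a = new int[numSlots];
      reusable.put(key, a); // re-put: replaces any smaller cached array
    } else {
      Arrays.fill(a, 0, numSlots, 0); // reset stale counts before reuse
    }
    return a;
  }

  public static void main(String[] args) {
    ReusableIntArrays ctx = new ReusableIntArrays();
    int[] a = ctx.acquire("level0:CountSlotArrAcc", 4);
    a[0] = 7;
    int[] b = ctx.acquire("level0:CountSlotArrAcc", 3); // fits: same array, zeroed
    int[] c = ctx.acquire("level0:CountSlotArrAcc", 8); // too small: replaced
    System.out.println((a == b) + " " + (a == c) + " " + b[0]);
  }
}
```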


> Reuse CountSlotArrAcc internal array for same level subFacets
> -
>
> Key: SOLR-9876
> URL: https://issues.apache.org/jira/browse/SOLR-9876
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Affects Versions: master (7.0)
>Reporter: Rustam Hashimov
>Priority: Minor
> Fix For: master (7.0)
>
>
> All facet processors run sequentially, so we can reuse the CountSlotArrAcc 
> internal array across same-level facet processors instead of reallocating a 
> new array for each.






[GitHub] lucene-solr pull request #126: SOLR-9876 Reuse CountSlotArrAcc internal arra...

2017-02-28 Thread dennisgove
Github user dennisgove commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/126#discussion_r103494004
  
--- Diff: solr/core/src/java/org/apache/solr/search/facet/SlotAcc.java ---
@@ -394,7 +394,16 @@ public CountSlotAcc(FacetContext fcontext) {
   int[] result;
   public CountSlotArrAcc(FacetContext fcontext, int numSlots) {
 super(fcontext);
-result = new int[numSlots];
+
+String key = fcontext.level + this.getClass().getSimpleName();
+result = (int[]) fcontext.getReusable(key);
+if (result == null || result.length < numSlots) {
+  result = new int[numSlots];
+  fcontext.addReusable(key, result);
--- End diff --

If null != result, is it reasonable to reset the value under the key to the 
newly created array?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Updated] (SOLR-10214) clean up BlockCache metrics

2017-02-28 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10214:

Attachment: SOLR-10214.patch

Updated patch that starts tracking store failures (i.e., failing to cache a 
block due to contention), along with some other small cleanups.

> clean up BlockCache metrics
> ---
>
> Key: SOLR-10214
> URL: https://issues.apache.org/jira/browse/SOLR-10214
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Yonik Seeley
> Attachments: SOLR-10214.patch, SOLR-10214.patch
>
>
> Many (most) of the block cache metrics are unused (I assume just inherited 
> from Blur) and unmaintained (i.e. most will be 0).  Currently only the size 
> and number of evictions are tracked.
> We should remove unused stats and start tracking
> - number of lookups (or number of misses)
> - number of hits
> - number of inserts
> - number of store failures






[jira] [Resolved] (SOLR-10196) ElectionContext#runLeaderProcess can hit NPE on core close.

2017-02-28 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-10196.

   Resolution: Fixed
Fix Version/s: master (7.0)
   6.5

> ElectionContext#runLeaderProcess can hit NPE on core close.
> ---
>
> Key: SOLR-10196
> URL: https://issues.apache.org/jira/browse/SOLR-10196
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Minor
> Fix For: 6.5, master (7.0)
>
>
> {noformat}
>[junit4]   2> 191445 INFO  
> (zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
> c:solrj_collection2 s:shard2 r:core_node3 
> x:solrj_collection2_shard2_replica1] o.a.s.m.SolrMetricManager Closing metric 
> reporters for: solr.core.solrj_collection2.shard2.replica1
>[junit4]   2> 191445 INFO  
> (zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
> c:solrj_collection2 s:shard2 r:core_node3 
> x:solrj_collection2_shard2_replica1] o.a.s.s.h.HdfsDirectory Closing hdfs 
> directory 
> hdfs://localhost:34043/solr_hdfs_home/solrj_collection2/core_node3/data
>[junit4]   2> 191476 INFO  
> (zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
> c:solrj_collection2 s:shard2 r:core_node3 
> x:solrj_collection2_shard2_replica1] o.a.s.s.h.HdfsDirectory Closing hdfs 
> directory 
> hdfs://localhost:34043/solr_hdfs_home/solrj_collection2/core_node3/data/index
>[junit4]   2> 191484 INFO  
> (zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
> c:solrj_collection2 s:shard2 r:core_node3 
> x:solrj_collection2_shard2_replica1] o.a.s.s.h.HdfsDirectory Closing hdfs 
> directory 
> hdfs://localhost:34043/solr_hdfs_home/solrj_collection2/core_node3/data/snapshot_metadata
>[junit4]   2> 191523 INFO  (coreCloseExecutor-172-thread-6) 
> [n:127.0.0.1:45055_ c:solrj_collection4 s:shard5 r:core_node4 
> x:solrj_collection4_shard5_replica1] o.a.s.m.SolrMetricManager Closing metric 
> reporters for: solr.core.solrj_collection4.shard5.replica1
>[junit4]   2> 191530 INFO  
> (zkCallback-7-thread-9-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_
> ] o.a.s.c.c.ZkStateReader A cluster state change: [WatchedEvent 
> state:SyncConnected type:NodeDataChanged 
> path:/collections/solrj_collection2/state.json] for collection 
> [solrj_collection2] has occurred - updating... (live nodes size: [1])
>[junit4]   2> 191554 INFO  (coreCloseExecutor-172-thread-6) 
> [n:127.0.0.1:45055_ c:solrj_collection4 s:shard5 r:core_node4 
> x:solrj_collection4_shard5_replica1] o.a.s.s.h.HdfsDirectory Closing hdfs 
> directory 
> hdfs://localhost:34043/solr_hdfs_home/solrj_collection4/core_node4/data/index
>[junit4]   2> 191555 ERROR 
> (zkCallback-7-thread-7-processing-n:127.0.0.1:45055_) [n:127.0.0.1:45055_ 
> c:solrj_collection2 s:shard2 r:core_node3 
> x:solrj_collection2_shard2_replica1] o.a.s.c.ShardLeaderElectionContext There 
> was a problem trying to register as the leader:java.lang.NullPointerException
>[junit4]   2>  at 
> org.apache.solr.cloud.ShardLeaderElectionContext.runLeaderProcess(ElectionContext.java:426)
>[junit4]   2>  at 
> org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:170)
>[junit4]   2>  at 
> org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:135)
>[junit4]   2>  at 
> org.apache.solr.cloud.LeaderElector.access$200(LeaderElector.java:56)
>[junit4]   2>  at 
> org.apache.solr.cloud.LeaderElector$ElectionWatcher.process(LeaderElector.java:348)
>[junit4]   2>  at 
> org.apache.solr.common.cloud.SolrZkClient$3.lambda$process$0(SolrZkClient.java:268)
>[junit4]   2>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>[junit4]   2>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>[junit4]   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
>[junit4]   2>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>[junit4]   2>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>[junit4]   2>  at java.lang.Thread.run(Thread.java:745)
> {noformat}






[jira] [Updated] (SOLR-10214) clean up BlockCache metrics

2017-02-28 Thread Otis Gospodnetic (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Otis Gospodnetic updated SOLR-10214:

Component/s: metrics

> clean up BlockCache metrics
> ---
>
> Key: SOLR-10214
> URL: https://issues.apache.org/jira/browse/SOLR-10214
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Yonik Seeley
> Attachments: SOLR-10214.patch
>
>






[jira] [Commented] (SOLR-10213) Copy Fields: remove wiki vs. cwiki overlap (and gap)

2017-02-28 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888340#comment-15888340
 ] 

Cassandra Targett commented on SOLR-10213:
--

As I have spare time/inclination, I work through the pages in the old wiki and 
add messages pointing to Confluence. The process I follow is written out here: 
https://cwiki.apache.org/confluence/display/solr/Internal+-+Maintaining+Documentation#Internal-MaintainingDocumentation-Migrating"Official"DocumentationfromMoinMoin.
It's ideal if the pages have a consistent message, but that text is really only 
a recommendation. My point, though, is that sample text already exists, so you 
don't need to reinvent the wheel.

There is a list of pages to be reviewed at: 
https://wiki.apache.org/solr/WikiManualComparison. This list has a complex 
history, but probably 3 (?) years ago I asked someone to look at all the pages 
in the Wiki and compare them to the Ref Guide, and note differences in content. 
This person had no knowledge of or history with Solr, and that shows. However, 
it's still a decent list of pages that are (or aren't) replicated in the Ref 
Guide. The specific notes about the content are out of date, but it's a starting 
point. I've added notes to pages I've completed, from which you can see I've 
worked through the A's and B's and maybe half of the C's. Only 23 letters of 
the alphabet to go :-)

> Copy Fields: remove wiki vs. cwiki overlap (and gap)
> 
>
> Key: SOLR-10213
> URL: https://issues.apache.org/jira/browse/SOLR-10213
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
>
> We just stumbled across the 'are copy fields recursive/cascading' question 
> again and on https://wiki.apache.org/solr/SchemaXml#Copy_Fields found the 
> answer which is "no" in the shape of the _The copy is done at the stream 
> source level and no copy feeds into another copy._ sentence but 
> https://cwiki.apache.org/confluence/display/solr/Copying+Fields didn't seem 
> to obviously have that answer although there is a _"... can/does copying 
> happen recursively?"_ question hidden in the comments section.
> This ticket here proposes to:
> * fully remove the wiki section content in favour of just a pointer to the 
> Solr Reference guide (cwiki)
> * review if anything on the wiki is missing and should be added to the cwiki
> * maybe: tidy up/remove some of the comments on the cwiki (the ones unrelated 
> to the cwiki page itself)






[jira] [Updated] (SOLR-10214) clean up BlockCache metrics

2017-02-28 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10214:

Attachment: SOLR-10214.patch

Here's a draft patch that
- removes a lot of the unused metrics
- adds totals for lookups, hits, evictions... the previous metrics only 
reported per-second stats since the last call
- moves the tracking of hit/miss from BlockDirectoryCache to BlockCache
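The move from per-second stats to totals can be sketched as follows (hypothetical class, not the actual SOLR-10214 patch): with monotonically increasing counters, any monitoring client can derive a rate by diffing two snapshots, instead of the cache resetting its own per-second stats on every call.

```java
import java.util.concurrent.atomic.LongAdder;

/** Hypothetical sketch, not the SOLR-10214 patch itself: keep cumulative
 *  totals; consumers derive per-interval rates from snapshot differences. */
public class BlockCacheTotals {
  final LongAdder lookups = new LongAdder();
  final LongAdder hits = new LongAdder();
  final LongAdder storeFails = new LongAdder();

  void onLookup(boolean hit) { lookups.increment(); if (hit) hits.increment(); }
  void onStoreFail() { storeFails.increment(); } // e.g. lost the race for a slot

  public static void main(String[] args) {
    BlockCacheTotals m = new BlockCacheTotals();
    for (int i = 0; i < 10; i++) m.onLookup(i % 2 == 0); // 10 lookups, 5 hits
    long snapshot = m.lookups.sum();
    for (int i = 0; i < 4; i++) m.onLookup(true);
    // rate over the interval = difference between cumulative snapshots
    System.out.println("lookups in interval: " + (m.lookups.sum() - snapshot));
  }
}
```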

> clean up BlockCache metrics
> ---
>
> Key: SOLR-10214
> URL: https://issues.apache.org/jira/browse/SOLR-10214
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
> Attachments: SOLR-10214.patch
>
>






[jira] [Commented] (LUCENE-7718) buildAndPushRelease.py script should refer to working tree instead of directory

2017-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888273#comment-15888273
 ] 

ASF subversion and git services commented on LUCENE-7718:
-

Commit a30eda94441d29868c68d7e9384dcffce4bc0010 in lucene-solr's branch 
refs/heads/branch_6_4 from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a30eda9 ]

LUCENE-7718: buildAndPushRelease.py script should refer to working tree instead 
of directory


> buildAndPushRelease.py script should refer to working tree instead of 
> directory
> ---
>
> Key: LUCENE-7718
> URL: https://issues.apache.org/jira/browse/LUCENE-7718
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Ishan Chattopadhyaya
>Priority: Minor
>
> As per this commit,
> https://github.com/git/git/commit/2a0e6cdedab306eccbd297c051035c13d0266343
> the git status no longer reports:
> bq. nothing to commit, working directory clean
> but reports:
> bq. nothing to commit, working tree clean






[jira] [Created] (LUCENE-7718) buildAndPushRelease.py script should refer to working tree instead of directory

2017-02-28 Thread Ishan Chattopadhyaya (JIRA)
Ishan Chattopadhyaya created LUCENE-7718:


 Summary: buildAndPushRelease.py script should refer to working 
tree instead of directory
 Key: LUCENE-7718
 URL: https://issues.apache.org/jira/browse/LUCENE-7718
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ishan Chattopadhyaya
Priority: Minor


As per this commit,
https://github.com/git/git/commit/2a0e6cdedab306eccbd297c051035c13d0266343
the git status no longer reports:
bq. nothing to commit, working directory clean
but reports:
bq. nothing to commit, working tree clean






[jira] [Comment Edited] (SOLR-10213) Copy Fields: remove wiki vs. cwiki overlap (and gap)

2017-02-28 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15888266#comment-15888266
 ] 

Erick Erickson edited comment on SOLR-10213 at 2/28/17 3:45 PM:


+1. Removing pages from the Wiki and/or putting big bold WARNING messages has 
been ongoing. Personally I prefer removing all the text and providing a link as 
you propose.

And I regularly delete comments if we address the question raised in the text 
or if they're just usage questions.


was (Author: erickerickson):
+1. Removing pages from the Wiki and/or putting big bold WARNING messages has 
been ongoing. Personally I prefer removing all the text and providing a link as 
you propose.

And I regularly delete comments if we address the question raised in the text.

> Copy Fields: remove wiki vs. cwiki overlap (and gap)
> 
>
> Key: SOLR-10213
> URL: https://issues.apache.org/jira/browse/SOLR-10213
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
>





