[jira] [Updated] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler updated LUCENE-6439:
----------------------------------
    Attachment: LUCENE-6439-maven.patch

Here is the patch with Maven support. On Windows the new test currently fails with:

{noformat}
org.apache.lucene.mockfile.TestMockFilesystems  Time elapsed: 23.615 sec  ERROR!
com.carrotsearch.randomizedtesting.ThreadLeakError: 9 threads leaked from SUITE scope at org.apache.lucene.mockfile.TestMockFilesystems:
   1) Thread[id=235, name=Thread-205, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=239, name=Thread-209, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=243, name=Thread-213, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   4) Thread[id=237, name=Thread-207, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   5) Thread[id=238, name=Thread-208, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   6) Thread[id=241, name=Thread-211, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   7) Thread[id=242, name=Thread-212, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   8) Thread[id=236, name=Thread-206, state=RUNNABLE, group=TGRP-TestMockFilesystems]
        at sun.nio.ch.Iocp.getQueuedCompletionStatus(Native Method)
        at sun.nio.ch.Iocp.access$300(Iocp.java:46)
        at sun.nio.ch.Iocp$EventHandlerTask.run(Iocp.java:333)
        at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at
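The leaked sun.nio.ch.Iocp threads above are the workers of the default thread pool that NIO.2 creates for asynchronous channels on Windows. As a minimal illustrative sketch (OwnedPoolExample is a hypothetical name, not part of the patch), the usual way to keep such threads out of a leak detector is to give the channel an executor you own and shut it down deterministically:

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.EnumSet;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: give the asynchronous channel its own executor so the worker
// threads can be terminated deterministically when the test is done,
// instead of lingering in the JVM-wide default pool.
public class OwnedPoolExample {
  public static void main(String[] args) throws Exception {
    Path tmp = Files.createTempFile("leakdemo", ".bin");
    ExecutorService pool = Executors.newFixedThreadPool(2);
    try (AsynchronousFileChannel ch = AsynchronousFileChannel.open(
        tmp, EnumSet.of(StandardOpenOption.WRITE), pool)) {
      // Write one byte and block until the async operation completes.
      ch.write(ByteBuffer.wrap(new byte[] { 42 }), 0).get();
    }
    // Because we own the pool, we can force its threads to exit.
    pool.shutdown();
    if (!pool.awaitTermination(10, TimeUnit.SECONDS)) {
      throw new IllegalStateException("worker threads did not terminate");
    }
    if (Files.size(tmp) != 1) {
      throw new AssertionError("unexpected file size");
    }
    Files.deleteIfExists(tmp);
  }
}
```

With the no-executor overload of AsynchronousFileChannel.open, the completion handlers instead run on a shared default pool whose lifetime the test cannot control, which is exactly what a per-suite thread-leak check trips over.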
[jira] [Created] (LUCENE-6446) Simplify Explanation API
Adrien Grand created LUCENE-6446:
------------------------------------

             Summary: Simplify Explanation API
                 Key: LUCENE-6446
                 URL: https://issues.apache.org/jira/browse/LUCENE-6446
             Project: Lucene - Core
          Issue Type: Bug
            Reporter: Adrien Grand
            Assignee: Adrien Grand
            Priority: Minor
             Fix For: Trunk, 5.2

We should make this API easier to consume, for instance:
 - enforce important components to be non-null (e.g. description)
 - entirely decouple the score computation from whether there is a match or not (Explanation assumes there is a match if the score is > 0; you need to use ComplexExplanation to override this behaviour)
 - return an empty array instead of null when there are no details

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6446) Simplify Explanation API
[ https://issues.apache.org/jira/browse/LUCENE-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Adrien Grand updated LUCENE-6446:
---------------------------------
    Attachment: LUCENE-6446.patch

Here is a patch which removes ComplexExplanation and makes Explanation immutable.

> Simplify Explanation API
> ------------------------
>
>                 Key: LUCENE-6446
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6446
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Adrien Grand
>            Assignee: Adrien Grand
>            Priority: Minor
>             Fix For: Trunk, 5.2
>
>         Attachments: LUCENE-6446.patch
>
> We should make this API easier to consume, for instance:
>  - enforce important components to be non-null (e.g. description)
>  - entirely decouple the score computation from whether there is a match or not (Explanation assumes there is a match if the score is > 0; you need to use ComplexExplanation to override this behaviour)
>  - return an empty array instead of null when there are no details
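As a rough sketch of what the three bullet points could look like as an immutable class (the factory-method names and field shapes here are assumptions for illustration, not necessarily what LUCENE-6446.patch actually does):

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Objects;

// Hypothetical sketch of an immutable Explanation: match status is carried
// explicitly instead of being inferred from the score, the description is
// enforced non-null, and details are never null (empty list instead).
final class Explanation {
  private final boolean match;
  private final float value;
  private final String description;
  private final List<Explanation> details;

  /** Creates an explanation for a matching document. */
  static Explanation match(float value, String description, Explanation... details) {
    return new Explanation(true, value, description, Arrays.asList(details));
  }

  /** Creates an explanation for a non-match; the score is always 0. */
  static Explanation noMatch(String description, Explanation... details) {
    return new Explanation(false, 0f, description, Arrays.asList(details));
  }

  private Explanation(boolean match, float value, String description, List<Explanation> details) {
    this.match = match;
    this.value = value;
    this.description = Objects.requireNonNull(description, "description must not be null");
    this.details = Collections.unmodifiableList(details);
  }

  boolean isMatch() { return match; }
  float getValue() { return value; }
  String getDescription() { return description; }
  /** Never null: an explanation without details returns an empty list. */
  List<Explanation> getDetails() { return details; }
}
```

With explicit match/noMatch factories there is no ambiguity for legitimate zero scores, which is the case ComplexExplanation existed to patch over.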
[jira] [Commented] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503705#comment-14503705 ]

ASF subversion and git services commented on LUCENE-6439:
---------------------------------------------------------

Commit 1674991 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1674991 ]

Merged revision(s) 1674990 from lucene/dev/trunk:
LUCENE-6439: enable support fors test-framework-tests on Maven build

> Create test-framework/src/test
> ------------------------------
>
>                 Key: LUCENE-6439
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6439
>             Project: Lucene - Core
>          Issue Type: Test
>            Reporter: Robert Muir
>         Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
> We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester), but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special.
[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_76) - Build # 4586 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4586/
Java: 64bit/jdk1.7.0_76 -XX:+UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 8919 lines...]
    [javac] Compiling 787 source files to C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\classes\java
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\response\SolrQueryResponse.java:26: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletResponse;
    [javac]                          ^
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletRequest;
    [javac]                          ^
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:142: error: cannot find symbol
    [javac]   private static RTimer getRequestTimer(HttpServletRequest req)
    [javac]                                         ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:152: error: cannot find symbol
    [javac]   public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
    [javac]                                                              ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:762: error: cannot find symbol
    [javac]   private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
    [javac]                                        ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:444: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: interface SolrRequestParser
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:690: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                                     ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class StandardRequestParser
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:543: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class MultipartRequestParser
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:520: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class RawRequestParser
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:598: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:639: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:656: error: cannot find
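The errors above all stem from one missing piece: solr-core is being compiled without the Servlet API on its classpath. Purely for illustration (the Lucene/Solr build of this era is Ant/Ivy based, so the coordinates below are only the Maven-central artifact that provides the missing javax.servlet.http package, not the project's actual build configuration):

```xml
<!-- Illustrative only: the artifact that supplies javax.servlet.http.
     In a webapp it is provided by the container, hence scope "provided". -->
<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>javax.servlet-api</artifactId>
  <version>3.0.1</version>
  <scope>provided</scope>
</dependency>
```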
[jira] [Commented] (LUCENE-6446) Simplify Explanation API
[ https://issues.apache.org/jira/browse/LUCENE-6446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504018#comment-14504018 ]

Ryan Ernst commented on LUCENE-6446:
------------------------------------

+1, this is much cleaner!

> Simplify Explanation API
> ------------------------
>
>                 Key: LUCENE-6446
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6446
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Adrien Grand
>            Assignee: Adrien Grand
>            Priority: Minor
>             Fix For: Trunk, 5.2
>
>         Attachments: LUCENE-6446.patch
>
> We should make this API easier to consume, for instance:
>  - enforce important components to be non-null (e.g. description)
>  - entirely decouple the score computation from whether there is a match or not (Explanation assumes there is a match if the score is > 0; you need to use ComplexExplanation to override this behaviour)
>  - return an empty array instead of null when there are no details
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80-ea-b05) - Build # 12206 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12206/
Java: 64bit/jdk1.7.0_80-ea-b05 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 8796 lines...]
    [javac] Compiling 787 source files to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletResponse;
    [javac]                          ^
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletRequest;
    [javac]                          ^
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
    [javac]   private static RTimer getRequestTimer(HttpServletRequest req)
    [javac]                                         ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
    [javac]   public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
    [javac]                                                              ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
    [javac]   private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
    [javac]                                        ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: interface SolrRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                                     ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class StandardRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class MultipartRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class RawRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol
    [javac]   public boolean isFormData(HttpServletRequest req) {
    [javac]
Re: VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf
Thanks everyone, the vote passes! I'm starting the release now.

Cassandra

On Mon, Apr 20, 2015 at 10:31 AM, Timothy Potter <thelabd...@gmail.com> wrote:
> +1 - looks great!
>
> On Mon, Apr 20, 2015 at 8:47 AM, Shalin Shekhar Mangar <shalinman...@gmail.com> wrote:
>> +1
>>
>> On Fri, Apr 17, 2015 at 8:04 PM, Cassandra Targett <casstarg...@gmail.com> wrote:
>>> Please vote for the release of the Apache Solr Reference Guide for Solr 5.1.
>>>
>>> The PDF is available at:
>>> https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/
>>>
>>> Steve Rowe and I made some big changes to the styling of the guide, so please raise any issues you find in your review.
>>>
>>> Here's my +1.
>>>
>>> Thanks,
>>> Cassandra
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
[jira] [Resolved] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uwe Schindler resolved LUCENE-6439.
-----------------------------------
       Resolution: Fixed
    Fix Version/s: 5.2
                   Trunk
         Assignee: Robert Muir

Thanks Robert!

> Create test-framework/src/test
> ------------------------------
>
>                 Key: LUCENE-6439
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6439
>             Project: Lucene - Core
>          Issue Type: Test
>            Reporter: Robert Muir
>            Assignee: Robert Muir
>             Fix For: Trunk, 5.2
>
>         Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
> We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester), but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special.
[jira] [Created] (SOLR-7433) Maven dependency on solr-core 5.1.0 brings in 4.10.3 artifacts
Bryan Bende created SOLR-7433:
---------------------------------

             Summary: Maven dependency on solr-core 5.1.0 brings in 4.10.3 artifacts
                 Key: SOLR-7433
                 URL: https://issues.apache.org/jira/browse/SOLR-7433
             Project: Solr
          Issue Type: Bug
    Affects Versions: 5.1, 5.0
            Reporter: Bryan Bende
            Priority: Minor

Adding a Maven dependency on solr-core 5.1.0 brings in some 4.10.3 artifacts:

{noformat}
<dependency>
    <groupId>org.apache.solr</groupId>
    <artifactId>solr-core</artifactId>
    <version>5.1.0</version>
    <scope>test</scope>
</dependency>
{noformat}

Running mvn dependency:tree shows:

{noformat}
+- org.apache.solr:solr-core:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-analyzers-common:jar:4.10.3:test
[INFO] |  +- org.apache.lucene:lucene-analyzers-kuromoji:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-analyzers-phonetic:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-backward-codecs:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-codecs:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-core:jar:4.10.3:test
[INFO] |  +- org.apache.lucene:lucene-expressions:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-grouping:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-highlighter:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-join:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-memory:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-misc:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-queries:jar:5.1.0:test
[INFO] |  +- org.apache.lucene:lucene-queryparser:jar:4.10.3:test
{noformat}

Verifying that solr-core came from Maven Central:

{noformat}
#Thu Apr 16 20:46:02 EDT 2015
solr-core-5.1.0.jar>central=
solr-core-5.1.0.pom>central=
{noformat}
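Until a corrected POM is published, one possible consumer-side workaround (illustrative only, and the artifact list is abridged to the mismatched ones reported above) is to pin the Lucene version via dependencyManagement, which overrides the versions declared transitively in solr-core's POM:

```xml
<!-- Hypothetical workaround: force the 4.10.3 transitive artifacts back
     to 5.1.0. Repeat for any other lucene-* artifact that resolves wrong. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.lucene</groupId>
      <artifactId>lucene-core</artifactId>
      <version>5.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.lucene</groupId>
      <artifactId>lucene-analyzers-common</artifactId>
      <version>5.1.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.lucene</groupId>
      <artifactId>lucene-queryparser</artifactId>
      <version>5.1.0</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

Re-running mvn dependency:tree after adding this should show all lucene-* artifacts at 5.1.0.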
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80-ea-b05) - Build # 12205 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12205/
Java: 64bit/jdk1.7.0_80-ea-b05 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 8966 lines...]
    [javac] Compiling 787 source files to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletResponse;
    [javac]                          ^
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletRequest;
    [javac]                          ^
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
    [javac]   private static RTimer getRequestTimer(HttpServletRequest req)
    [javac]                                         ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
    [javac]   public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
    [javac]                                                              ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
    [javac]   private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
    [javac]                                        ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: interface SolrRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                                     ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class StandardRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class MultipartRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class RawRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol
    [javac]   public boolean isFormData(HttpServletRequest req) {
    [javac]                             ^
[jira] [Commented] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503699#comment-14503699 ]

ASF subversion and git services commented on LUCENE-6439:
---------------------------------------------------------

Commit 1674990 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1674990 ]

LUCENE-6439: enable support fors test-framework-tests on Maven build

> Create test-framework/src/test
> ------------------------------
>
>                 Key: LUCENE-6439
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6439
>             Project: Lucene - Core
>          Issue Type: Test
>            Reporter: Robert Muir
>         Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
> We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester), but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special.
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2157 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2157/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 8980 lines...]
    [javac] Compiling 787 source files to /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/classes/java
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletResponse;
    [javac]                          ^
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
    [javac] import javax.servlet.http.HttpServletRequest;
    [javac]                          ^
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
    [javac]   private static RTimer getRequestTimer(HttpServletRequest req)
    [javac]                                         ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
    [javac]   public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
    [javac]                                                              ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
    [javac]   private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
    [javac]                                        ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class SolrRequestParsers
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: interface SolrRequestParser
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                                     ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class StandardRequestParser
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class MultipartRequestParser
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
    [javac]       final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
    [javac]             ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class RawRequestParser
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
    [javac]   public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
    [javac]                                               ^
    [javac]   symbol:   class HttpServletRequest
    [javac]   location: class FormDataRequestParser
    [javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol
    [javac]   public boolean isFormData(HttpServletRequest req) {
    [javac]
[jira] [Commented] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503736#comment-14503736 ]

Uwe Schindler commented on LUCENE-6439:
---------------------------------------

The TestMockFileSystems test also leaks threads in old trunk without the changes here, so this is unrelated. It seems to be a Maven/Surefire problem.

> Create test-framework/src/test
> ------------------------------
>
>                 Key: LUCENE-6439
>                 URL: https://issues.apache.org/jira/browse/LUCENE-6439
>             Project: Lucene - Core
>          Issue Type: Test
>            Reporter: Robert Muir
>         Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch
>
> We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester), but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special.
[jira] [Commented] (SOLR-4050) Solr example fails to start in nightly-smoke
[ https://issues.apache.org/jira/browse/SOLR-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14503854#comment-14503854 ] ASF subversion and git services commented on SOLR-4050: --- Commit 1674998 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1674998 ] SOLR-7429: Remove Solr server module sync-hack introduced in SOLR-4050. Solr example fails to start in nightly-smoke Key: SOLR-4050 URL: https://issues.apache.org/jira/browse/SOLR-4050 Project: Solr Issue Type: Bug Reporter: Michael McCandless Priority: Blocker Fix For: 4.1, Trunk The nightly smoke job is stalled (I'll go kill it shortly): https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/22/console It's stalled when trying to run the Solr example ... the server produced this output: {noformat} java.lang.ClassNotFoundException: org.eclipse.jetty.xml.XmlConfiguration at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:321) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) at org.eclipse.jetty.start.Main.invokeMain(Main.java:424) at org.eclipse.jetty.start.Main.start(Main.java:602) at org.eclipse.jetty.start.Main.main(Main.java:82) ClassNotFound: org.eclipse.jetty.xml.XmlConfiguration Usage: java -jar start.jar [options] [properties] [configs] java -jar start.jar --help # for more information {noformat} Seems likely the Jetty upgrade somehow caused this... Separately I committed a fix to smoke tester so that it quickly fails if the Solr example fails to start ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Assigned] (LUCENE-6438) Improve clean-jars when dealing with symbolic links.
[ https://issues.apache.org/jira/browse/LUCENE-6438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller reassigned LUCENE-6438: --- Assignee: Mark Miller Improve clean-jars when dealing with symbolic links. Key: LUCENE-6438 URL: https://issues.apache.org/jira/browse/LUCENE-6438 Project: Lucene - Core Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Ever since the jars in the lib folders started using symbolic links on Linux, I've run into jar problems when working with an old checkout or switching branches on a git checkout. You would normally expect ant clean-jars to help, but it didn't, and that led to some headaches and random bs. Turns out, clean-jars is not properly removing all symbolic links for me. I've seen two cases: symbolic links to jars that are not removed, and broken symbolic links to jars. I can get rid of the symbolic links with the following:
{code}
<target name="clean-jars" description="Remove all JAR files from lib folders in the checkout">
  <delete failonerror="true" removeNotFollowedSymlinks="true">
    <fileset dir="." followsymlinks="false"/>
  </delete>
</target>
{code}
But that doesn't work with the broken links. I guess you can remove those with the Ant Symlink task, but it seems to handle only one specific link at a time, which is not that useful. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
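The broken-link case the Ant task misses can also be found programmatically. As a rough sketch (a hypothetical helper, not part of the Lucene build), Java NIO can identify dangling symlinks: `Files.walk` does not follow links, so a dangling link still appears as an entry, while `Files.exists` (which does follow links) returns false for it:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class BrokenLinkCleaner {
    // Collect symbolic links under root whose targets no longer exist.
    // isSymbolicLink identifies the link itself; Files.exists follows it
    // and returns false exactly when the target is gone.
    public static List<Path> findBrokenLinks(Path root) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(Files::isSymbolicLink)
                        .filter(p -> !Files.exists(p))
                        .collect(Collectors.toList());
        }
    }
}
```

Each returned path could then be deleted with `Files.delete`, which removes the link itself rather than its (missing) target.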
[jira] [Resolved] (SOLR-7429) Remove Solr server module sync-hack introduced in SOLR-4050.
[ https://issues.apache.org/jira/browse/SOLR-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller resolved SOLR-7429. --- Resolution: Fixed Fix Version/s: 5.2 Trunk Remove Solr server module sync-hack introduced in SOLR-4050. Key: SOLR-7429 URL: https://issues.apache.org/jira/browse/SOLR-7429 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Fix For: Trunk, 5.2 Attachments: SOLR-7429.patch This is annoying to the beast script I have and for other obvious reasons. We would really like to use sync=true here like everywhere. I'll see what I can do. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7430) Encrypted pptx/xlsx causes a ClassNotFoundException
[ https://issues.apache.org/jira/browse/SOLR-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504003#comment-14504003 ] Jon Scharff commented on SOLR-7430: --- I used the data_driven_schema_configs Encrypted pptx/xlsx causes a ClassNotFoundException --- Key: SOLR-7430 URL: https://issues.apache.org/jira/browse/SOLR-7430 Project: Solr Issue Type: Bug Components: contrib - Solr Cell (Tika extraction) Affects Versions: 5.1 Environment: Windows 7 (64bit) jre 1.8.0_40-b26 (64 bit) Reporter: Jon Scharff When indexing an encrypted pptx or xlsx file via the command <solr-home>java -Dc=core -Dauto=yes -Ddata=files -jar example\exampledocs\post.jar file.pptx on a server started with <solr-home>bin\solr start, a ClassNotFoundException results instead of an EncryptedDocumentException. It appears that POI is using reflection to get the proper encryption handler, but the necessary jar files are not supplied by jetty's ClassLoader. A portion of the resulting error trace is below. org.apache.solr.common.SolrException: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.OfficeParser@2e973e0f at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:227) ... Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.OfficeParser@2e973e0f at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:262) at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256) at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120) at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:221) ... 
31 more Caused by: java.io.IOException: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:69) at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:228) at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:172) at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256) ... 34 more Caused by: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:430) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:383) at org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:150) at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:67) ... 37 more -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
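The failure mode is worth spelling out: because POI resolves its encryption builder by class name at runtime, a jar missing from the active classpath only surfaces as a ClassNotFoundException wrapped in an IOException, never as a compile-time error or a clear "encrypted document" message. A minimal sketch of that pattern (the helper below is hypothetical; only the looked-up class name comes from the stack trace above):

```java
import java.io.IOException;

public class ReflectiveBuilderLookup {
    // Load and instantiate a class by name, the way a reflective handler
    // registry works. If the jar providing the class is absent from the
    // classloader's path, the ClassNotFoundException is wrapped in an
    // IOException, which the caller (here, Tika) then reports as a generic
    // parse failure rather than an encryption-specific error.
    public static Object newBuilder(String className) throws IOException {
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IOException(e);
        }
    }
}
```

With the POI jars absent, looking up `org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder` this way reproduces the wrapped ClassNotFoundException seen in the trace.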
[jira] [Resolved] (LUCENE-6444) ant nightly-smoke fails in trunk
[ https://issues.apache.org/jira/browse/LUCENE-6444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael McCandless resolved LUCENE-6444. Resolution: Fixed Fix Version/s: Trunk ant nightly-smoke fails in trunk Key: LUCENE-6444 URL: https://issues.apache.org/jira/browse/LUCENE-6444 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir Assignee: Michael McCandless Fix For: Trunk I don't know the last time this was run by jenkins, but: {noformat} [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] Releases that don't seem to be tested: [smoker] 4.10.4 {noformat} And i don't see any unsupported-4.10.4-cfs/nocfs.zip in the backwards-codec/ module (to test we do the right thing), so I think the failure is correct. I will fix this a little bit later if nobody beats me to it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503728#comment-14503728 ] Uwe Schindler commented on LUCENE-6439: --- I have no idea why TestMockFileSystems leaks threads on Maven but not on Ant. Maybe [~steve_rowe] knows better. Create test-framework/src/test -- Key: LUCENE-6439 URL: https://issues.apache.org/jira/browse/LUCENE-6439 Project: Lucene - Core Issue Type: Test Reporter: Robert Muir Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester) but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-7391) Use a time based expiration cache for one off hdfs FileSystem instances.
[ https://issues.apache.org/jira/browse/SOLR-7391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mark Miller updated SOLR-7391: -- Attachment: SOLR-7391.patch Patch attached. Use a time based expiration cache for one off hdfs FileSystem instances. Key: SOLR-7391 URL: https://issues.apache.org/jira/browse/SOLR-7391 Project: Solr Issue Type: Improvement Components: hdfs, s Reporter: Mark Miller Assignee: Mark Miller Attachments: SOLR-7391.patch Most FileSystem clients are tied to a SolrCore and long lived, but in some cases where we don't have SolrCore context we create a short lived hdfs client object. Because these instances can be created via user generated actions, we don't want to be able to create too many of them - they have overhead that does not make them great candidates for being spun up for a single call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
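The idea of capping short-lived FileSystem instances with a time-based expiration cache can be sketched as follows. This is a minimal stdlib-only sketch under assumed semantics, not the attached patch (which likely reuses an existing cache utility): entries are shared while younger than the TTL and recreated afterwards, so repeated one-off requests reuse one expensive instance instead of spinning up a new one per call.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Minimal time-based expiration cache: get() returns the cached value
// while it is younger than ttlMillis, otherwise invokes the loader to
// create a fresh one and caches that.
public class ExpiringCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long createdMillis;
        Entry(V value, long createdMillis) { this.value = value; this.createdMillis = createdMillis; }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public ExpiringCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public V get(K key, Supplier<V> loader) {
        long now = System.currentTimeMillis();
        // compute() is atomic per key, so concurrent callers within the
        // TTL share a single loader invocation's result.
        return map.compute(key, (k, old) ->
            (old != null && now - old.createdMillis < ttlMillis)
                ? old
                : new Entry<>(loader.get(), now)).value;
    }
}
```

A real version for closeable resources like FileSystem would also need to close expired instances once no caller holds them, which is the part that makes such caches genuinely tricky.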
[jira] [Commented] (SOLR-7429) Remove Solr server module sync-hack introduced in SOLR-4050.
[ https://issues.apache.org/jira/browse/SOLR-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14503853#comment-14503853 ] ASF subversion and git services commented on SOLR-7429: --- Commit 1674998 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1674998 ] SOLR-7429: Remove Solr server module sync-hack introduced in SOLR-4050. Remove Solr server module sync-hack introduced in SOLR-4050. Key: SOLR-7429 URL: https://issues.apache.org/jira/browse/SOLR-7429 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Attachments: SOLR-7429.patch This is annoying to the beast script I have and for other obvious reasons. We would really like to use sync=true here like everywhere. I'll see what I can do. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7429) Remove Solr server module sync-hack introduced in SOLR-4050.
[ https://issues.apache.org/jira/browse/SOLR-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14503849#comment-14503849 ] ASF subversion and git services commented on SOLR-7429: --- Commit 1674997 from [~markrmil...@gmail.com] in branch 'dev/trunk' [ https://svn.apache.org/r1674997 ] SOLR-7429: Remove Solr server module sync-hack introduced in SOLR-4050. Remove Solr server module sync-hack introduced in SOLR-4050. Key: SOLR-7429 URL: https://issues.apache.org/jira/browse/SOLR-7429 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Attachments: SOLR-7429.patch This is annoying to the beast script I have and for other obvious reasons. We would really like to use sync=true here like everywhere. I'll see what I can do. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-4050) Solr example fails to start in nightly-smoke
[ https://issues.apache.org/jira/browse/SOLR-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14503850#comment-14503850 ] ASF subversion and git services commented on SOLR-4050: --- Commit 1674997 from [~markrmil...@gmail.com] in branch 'dev/trunk' [ https://svn.apache.org/r1674997 ] SOLR-7429: Remove Solr server module sync-hack introduced in SOLR-4050. Solr example fails to start in nightly-smoke Key: SOLR-4050 URL: https://issues.apache.org/jira/browse/SOLR-4050 Project: Solr Issue Type: Bug Reporter: Michael McCandless Priority: Blocker Fix For: 4.1, Trunk The nightly smoke job is stalled (I'll go kill it shortly): https://builds.apache.org/job/Lucene-Solr-SmokeRelease-4.x/22/console It's stalled when trying to run the Solr example ... the server produced this output: {noformat} java.lang.ClassNotFoundException: org.eclipse.jetty.xml.XmlConfiguration at java.net.URLClassLoader$1.run(URLClassLoader.java:217) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:321) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) at org.eclipse.jetty.start.Main.invokeMain(Main.java:424) at org.eclipse.jetty.start.Main.start(Main.java:602) at org.eclipse.jetty.start.Main.main(Main.java:82) ClassNotFound: org.eclipse.jetty.xml.XmlConfiguration Usage: java -jar start.jar [options] [properties] [configs] java -jar start.jar --help # for more information {noformat} Seems likely the Jetty upgrade somehow caused this... Separately I committed a fix to smoke tester so that it quickly fails if the Solr example fails to start ... -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503880#comment-14503880 ] Robert Muir commented on LUCENE-6439: - Hi Uwe: on Windows we can't currently instantiate MockWindows. MockWindows does not play along well with the real Windows! To make it work better, we have to emulate Windows semantics better so it's more correct. The assumeFalse uses Constants.WINDOWS. So maybe something is wrong with this logic in Constants.java? Anyway, perhaps we should look into it more on a separate issue. Create test-framework/src/test -- Key: LUCENE-6439 URL: https://issues.apache.org/jira/browse/LUCENE-6439 Project: Lucene - Core Issue Type: Test Reporter: Robert Muir Assignee: Robert Muir Fix For: Trunk, 5.2 Attachments: LUCENE-6439-maven.patch, LUCENE-6439.patch We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester) but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
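If the suspicion above is right, the check in question reduces to a prefix test on the JVM's os.name system property. A tiny sketch of that style of detection (an assumption about how such a constant is typically derived, not the exact Lucene Constants code):

```java
public class WindowsDetect {
    // OS-detection constants are typically computed once at class
    // initialization by comparing os.name against a known prefix. If the
    // JVM reported an unexpected os.name, assumeFalse(WINDOWS) would not
    // skip the test, and the mock filesystem would run against the real
    // Windows semantics it cannot emulate.
    public static boolean isWindows(String osName) {
        return osName != null && osName.startsWith("Windows");
    }

    public static final boolean WINDOWS = isWindows(System.getProperty("os.name"));
}
```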
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2982 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2982/ All tests passed Build Log: [...truncated 8724 lines...] [javac] Compiling 787 source files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/java [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist [javac] import javax.servlet.http.HttpServletResponse; [javac] ^ [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist [javac] import javax.servlet.http.HttpServletRequest; [javac] ^ [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol [javac] private static RTimer getRequestTimer(HttpServletRequest req) [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol [javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol [javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException { [javac]^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception; [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: interface SolrRequestParser [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class StandardRequestParser [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class MultipartRequestParser [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class RawRequestParser [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class FormDataRequestParser [javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class FormDataRequestParser [javac]
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_60-ea-b06) - Build # 12208 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12208/ Java: 32bit/jdk1.8.0_60-ea-b06 -server -XX:+UseSerialGC All tests passed Build Log: [...truncated 8840 lines...] [javac] Compiling 787 source files to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java [javac] warning: [options] bootstrap class path not set in conjunction with -source 1.7 [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist [javac] import javax.servlet.http.HttpServletResponse; [javac] ^ [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist [javac] import javax.servlet.http.HttpServletRequest; [javac] ^ [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol [javac] private static RTimer getRequestTimer(HttpServletRequest req) [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol [javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol [javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException { [javac]^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception; [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: interface SolrRequestParser [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class StandardRequestParser [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class MultipartRequestParser [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class RawRequestParser [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class FormDataRequestParser [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class FormDataRequestParser [javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol [javac] public boolean
[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.7.0_76) - Build # 4587 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4587/ Java: 64bit/jdk1.7.0_76 -XX:-UseCompressedOops -XX:+UseSerialGC All tests passed Build Log: [...truncated 8779 lines...] [javac] Compiling 787 source files to C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build\solr-core\classes\java [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\response\SolrQueryResponse.java:26: error: package javax.servlet.http does not exist [javac] import javax.servlet.http.HttpServletResponse; [javac] ^ [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:20: error: package javax.servlet.http does not exist [javac] import javax.servlet.http.HttpServletRequest; [javac] ^ [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:142: error: cannot find symbol [javac] private static RTimer getRequestTimer(HttpServletRequest req) [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:152: error: cannot find symbol [javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:762: error: cannot find symbol [javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException { [javac]^ [javac] symbol: class HttpServletRequest [javac] location: class SolrRequestParsers [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:444: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception; [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: interface SolrRequestParser [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:690: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class StandardRequestParser [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:543: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class MultipartRequestParser [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:520: error: cannot find symbol [javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class RawRequestParser [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:598: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class FormDataRequestParser [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:639: error: cannot find symbol [javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception { [javac] ^ [javac] symbol: class HttpServletRequest [javac] location: class FormDataRequestParser [javac] C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\core\src\java\org\apache\solr\servlet\SolrRequestParsers.java:656: error: cannot find
[jira] [Commented] (SOLR-7429) Remove Solr server module sync-hack introduced in SOLR-4050.
[ https://issues.apache.org/jira/browse/SOLR-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504367#comment-14504367 ] Shalin Shekhar Mangar commented on SOLR-7429: - I have committed the patch to get the builds to run again. We can revert it if people think that it is not the right fix. Remove Solr server module sync-hack introduced in SOLR-4050. Key: SOLR-7429 URL: https://issues.apache.org/jira/browse/SOLR-7429 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Fix For: Trunk, 5.2 Attachments: SOLR-7429-fix-servlet-api-deps.patch, SOLR-7429.patch This is annoying to the beast script I have and for other obvious reasons. We would really like to use sync=true here like everywhere. I'll see what I can do. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7429) Remove Solr server module sync-hack introduced in SOLR-4050.
[ https://issues.apache.org/jira/browse/SOLR-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504365#comment-14504365 ] ASF subversion and git services commented on SOLR-7429: --- Commit 1675028 from sha...@apache.org in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1675028 ] SOLR-7429: Switch to standard servlet-api instead of orbit to get 5x build working Remove Solr server module sync-hack introduced in SOLR-4050. Key: SOLR-7429 URL: https://issues.apache.org/jira/browse/SOLR-7429 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Fix For: Trunk, 5.2 Attachments: SOLR-7429-fix-servlet-api-deps.patch, SOLR-7429.patch This is annoying to the beast script I have and for other obvious reasons. We would really like to use sync=true here like everywhere. I'll see what I can do. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6442) Add a mockfs with unpredictable but deterministic file listing order
[ https://issues.apache.org/jira/browse/LUCENE-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir updated LUCENE-6442: Attachment: LUCENE-6442.patch Here is a patch. The new shuffling one is tiny, but I refactored and added more tests for the mockfs things. I also cleaned up the extras to be clearer, as Dawid had suggested (LUCENE-6434). Add a mockfs with unpredictable but deterministic file listing order Key: LUCENE-6442 URL: https://issues.apache.org/jira/browse/LUCENE-6442 Project: Lucene - Core Issue Type: Task Reporter: Robert Muir Attachments: LUCENE-6442.patch Any test that uses directory listing APIs (Directory.listAll(), DirectoryStream, walkFileTree, etc.) and does not sort the results can cause reproducibility difficulties, because it might e.g. consume from random() in a different order and so on. We can instead sort and shuffle in a predictable way per-class, based on the random seed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
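The proposed scheme, first sort the listing to erase OS-dependent order and then permute it with a Random derived from the test seed, can be sketched like this (a hypothetical standalone helper, not the patch's actual mockfs code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public class ShuffledListing {
    // Return a directory listing in an order that is unpredictable (so
    // tests cannot silently depend on it) yet fully determined by the
    // seed, so a failing test seed still reproduces the same order.
    public static List<String> shuffle(List<String> names, long seed) {
        List<String> copy = new ArrayList<>(names);
        Collections.sort(copy);                       // erase OS/filesystem-dependent order
        Collections.shuffle(copy, new Random(seed));  // deterministic permutation from the seed
        return copy;
    }
}
```

Sorting before shuffling is the key step: without it, two machines whose filesystems list entries differently would feed different inputs to the same seeded permutation and still diverge.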
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_60-ea-b06) - Build # 12210 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12210/
Java: 64bit/jdk1.8.0_60-ea-b06 -XX:-UseCompressedOops -XX:+UseG1GC
All tests passed
Build Log:
[...truncated 8829 lines...]
[javac] Compiling 787 source files to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
[javac] warning: [options] bootstrap class path not set in conjunction with -source 1.7
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletResponse;
[javac] ^
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletRequest;
[javac] ^
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
[javac] private static RTimer getRequestTimer(HttpServletRequest req)
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
[javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
[javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: interface SolrRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class StandardRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class MultipartRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class RawRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol
[javac] public
[jira] [Commented] (SOLR-6665) ZkController.publishAndWaitForDownStates should not use core name
[ https://issues.apache.org/jira/browse/SOLR-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504386#comment-14504386 ] ASF subversion and git services commented on SOLR-6665: --- Commit 1675030 from sha...@apache.org in branch 'dev/trunk' [ https://svn.apache.org/r1675030 ] SOLR-6665: ZkController.publishAndWaitForDownStates can return before all local cores are marked as 'down' if multiple replicas with the same core name exist in the cluster ZkController.publishAndWaitForDownStates should not use core name - Key: SOLR-6665 URL: https://issues.apache.org/jira/browse/SOLR-6665 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: 5.0, Trunk Attachments: SOLR-6665.patch ZkController.publishAndWaitForDownStates uses a List<String> to keep track of all core names that have been published as down. It should use a set of coreNodeNames instead of core names for correctness. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
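A minimal sketch of why the data structure matters here; the class name and tuple layout are hypothetical, not the actual ZkController code. Two replicas on different nodes may share a core name, so a collection keyed by core name conflates them, while coreNodeNames stay distinct per replica:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DownStateTracking {

    // r[0] = core name, r[1] = coreNodeName (illustrative layout).
    // Tracking published "down" states by core name loses replicas that
    // share a name across nodes.
    public static Set<String> byCoreName(List<String[]> replicas) {
        Set<String> seen = new HashSet<>();
        for (String[] r : replicas) seen.add(r[0]);
        return seen;
    }

    // Tracking by coreNodeName keeps one entry per replica, so waiting for
    // "all entries published" actually waits for every replica.
    public static Set<String> byCoreNodeName(List<String[]> replicas) {
        Set<String> seen = new HashSet<>();
        for (String[] r : replicas) seen.add(r[1]);
        return seen;
    }
}
```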
[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON
[ https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504203#comment-14504203 ] Noble Paul commented on SOLR-7176: -- Optimistic locking is fine, but you can do a retry loop if an attempt to write fails allow zkcli to modify JSON -- Key: SOLR-7176 URL: https://issues.apache.org/jira/browse/SOLR-7176 Project: Solr Issue Type: New Feature Reporter: Yonik Seeley Assignee: Noble Paul Priority: Minor Attachments: SOLR-7176.patch, SOLR-7176.patch, SOLR-7176.patch To enable SSL, we have instructions like the following: {code} server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put /clusterprops.json '{urlScheme:https}' {code} Overwriting the value won't work well when we have more properties to put in clusterprops. We should be able to change individual values or perhaps merge values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6442) Add a mockfs with unpredictable but deterministic file listing order
[ https://issues.apache.org/jira/browse/LUCENE-6442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504274#comment-14504274 ] Ryan Ernst commented on LUCENE-6442: +1, the mockfs test improvements are great. Add a mockfs with unpredictable but deterministic file listing order Key: LUCENE-6442 URL: https://issues.apache.org/jira/browse/LUCENE-6442 Project: Lucene - Core Issue Type: Task Reporter: Robert Muir Attachments: LUCENE-6442.patch Any test that works with directory listing APIs (Directory.listAll(), DirectoryStream, walkFileTree, etc.) and does not sort the results can run into reproducibility difficulties, because it might e.g. consume from random() in a different order, and so on. We can instead sort and then shuffle in a predictable way per-class, based on the random seed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON
[ https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504356#comment-14504356 ] Hrishikesh Gadre commented on SOLR-7176: Are you suggesting that the CLI should run a retry loop? I would prefer not to retry automatically, since it could overwrite some other directive from another CLI invocation. In my current patch, I have added a check to see if the value is set correctly (after receiving an error). If it is, the invocation is successful; if not, we fail the invocation. This also works in scenarios where two invocations attempt to set the same value. Adding retries would also raise questions such as: how many retries are sufficient? Is there an exponential backoff policy? What is the time interval between two retries? All of this can be implemented outside Solr very easily if required (e.g. a script can run the zkcli command in a loop until it succeeds). allow zkcli to modify JSON -- Key: SOLR-7176 URL: https://issues.apache.org/jira/browse/SOLR-7176 Project: Solr Issue Type: New Feature Reporter: Yonik Seeley Assignee: Noble Paul Priority: Minor Attachments: SOLR-7176.patch, SOLR-7176.patch, SOLR-7176.patch To enable SSL, we have instructions like the following: {code} server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put /clusterprops.json '{urlScheme:https}' {code} Overwriting the value won't work well when we have more properties to put in clusterprops. We should be able to change individual values or perhaps merge values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
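The no-retry, verify-on-error approach described in the comment can be sketched against a tiny in-memory stand-in for a versioned znode. A real implementation would go through the ZooKeeper client's version-conditional setData; all class and method names here are hypothetical:

```java
public class ConditionalUpdate {

    // In-memory stand-in for a znode with ZooKeeper-style version semantics.
    public static class VersionedNode {
        private String data = "";
        private int version = 0;

        // Compare-and-set: succeeds only if the caller saw the current version.
        public synchronized boolean setData(String newData, int expectedVersion) {
            if (version != expectedVersion) return false;
            data = newData;
            version++;
            return true;
        }

        public synchronized String getData() { return data; }
        public synchronized int getVersion() { return version; }
    }

    // One optimistic write attempt; on failure, re-read and treat the call as
    // successful only if the desired value is already in place. No blind retry,
    // so a concurrent writer's different directive is never overwritten.
    public static boolean writeOrVerify(VersionedNode node, String desired) {
        int seen = node.getVersion();
        if (node.setData(desired, seen)) return true;
        return desired.equals(node.getData());
    }
}
```

The second branch is exactly the "check after receiving an error" step: two invocations racing to set the same value both succeed, while a racing invocation with a different value fails cleanly instead of being retried over the other writer.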
[jira] [Updated] (SOLR-7429) Remove Solr server module sync-hack introduced in SOLR-4050.
[ https://issues.apache.org/jira/browse/SOLR-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-7429: Attachment: SOLR-7429-fix-servlet-api-deps.patch This patch fixes it for me. We were using the servlet-api jar from jetty orbit project in 5x. I'm not sure why that was chosen but this patch replaces it with the standard servlet-api jar in solr/server/ivy.xml and lucene/replicator/ivy.xml Remove Solr server module sync-hack introduced in SOLR-4050. Key: SOLR-7429 URL: https://issues.apache.org/jira/browse/SOLR-7429 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Fix For: Trunk, 5.2 Attachments: SOLR-7429-fix-servlet-api-deps.patch, SOLR-7429.patch This is annoying to the beast script I have and for other obvious reasons. We would really like to use sync=true here like everywhere. I'll see what I can do. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b54) - Build # 12207 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12207/
Java: 64bit/jdk1.9.0-ea-b54 -XX:-UseCompressedOops -XX:+UseParallelGC
All tests passed
Build Log:
[...truncated 8840 lines...]
[javac] Compiling 787 source files to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
[javac] warning: [options] bootstrap class path not set in conjunction with -source 1.7
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletRequest;
[javac] ^
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: interface SolrRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class MultipartRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class RawRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol
[javac] public boolean isFormData(HttpServletRequest req) {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class StandardRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
[javac] private static RTimer getRequestTimer(HttpServletRequest req)
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
[javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
[javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[jira] [Commented] (LUCENE-6392) Add offset limit to Highlighter's TokenStreamFromTermVector
[ https://issues.apache.org/jira/browse/LUCENE-6392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504307#comment-14504307 ] David Smiley commented on LUCENE-6392: -- I created LUCENE-6445 for a broader refactor/simplification of TokenSources.java. I don't think this issue here should bother with modifications to that class; it can be limited to TokenStreamFromTermVector. I plan to commit this issue in ~24 hours without any TokenSources modifications. Add offset limit to Highlighter's TokenStreamFromTermVector --- Key: LUCENE-6392 URL: https://issues.apache.org/jira/browse/LUCENE-6392 Project: Lucene - Core Issue Type: Improvement Components: modules/highlighter Reporter: David Smiley Assignee: David Smiley Fix For: 5.2 Attachments: LUCENE-6392_highlight_term_vector_maxStartOffset.patch The Highlighter's TokenStreamFromTermVector utility, typically accessed via TokenSources, should have the ability to filter out tokens beyond a configured offset. There is a TODO there already, and this issue addresses it. New methods in TokenSources now propagate a limit. This patch also includes some memory saving optimizations, to be described shortly. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
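The offset limit being added can be illustrated generically. The (term, startOffset) pair representation below is a stand-in, not the Highlighter's actual TokenStream API, and treating a negative limit as "unlimited" is an assumption made for this sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class OffsetLimitFilter {

    // Keep only tokens whose start offset is within the configured limit, so
    // downstream highlighting never pays (in time or memory) for tokens past
    // the window it will actually display. A negative maxStartOffset means
    // "no limit" here (an assumption, not necessarily the patch's convention).
    public static List<Map.Entry<String, Integer>> limit(
            List<Map.Entry<String, Integer>> tokens, int maxStartOffset) {
        List<Map.Entry<String, Integer>> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> t : tokens) {
            if (maxStartOffset < 0 || t.getValue() <= maxStartOffset) {
                kept.add(t);
            }
        }
        return kept;
    }
}
```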
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2983 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2983/
All tests passed
Build Log:
[...truncated 8723 lines...]
[javac] Compiling 787 source files to /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/java
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletResponse;
[javac] ^
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletRequest;
[javac] ^
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
[javac] private static RTimer getRequestTimer(HttpServletRequest req)
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
[javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
[javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: interface SolrRequestParser
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class StandardRequestParser
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class MultipartRequestParser
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class RawRequestParser
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[jira] [Updated] (SOLR-6665) ZkController.publishAndWaitForDownStates should not use core name
[ https://issues.apache.org/jira/browse/SOLR-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-6665: Affects Version/s: 4.10.4, 5.1 Fix Version/s: (was: 5.0) 5.2 ZkController.publishAndWaitForDownStates should not use core name - Key: SOLR-6665 URL: https://issues.apache.org/jira/browse/SOLR-6665 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1, 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: Trunk, 5.2 Attachments: SOLR-6665.patch ZkController.publishAndWaitForDownStates uses a List<String> to keep track of all core names that have been published as down. It should use a set of coreNodeNames instead of core names for correctness. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-6665) ZkController.publishAndWaitForDownStates should not use core name
[ https://issues.apache.org/jira/browse/SOLR-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504397#comment-14504397 ] ASF subversion and git services commented on SOLR-6665: --- Commit 1675033 from sha...@apache.org in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1675033 ] SOLR-6665: ZkController.publishAndWaitForDownStates can return before all local cores are marked as 'down' if multiple replicas with the same core name exist in the cluster ZkController.publishAndWaitForDownStates should not use core name - Key: SOLR-6665 URL: https://issues.apache.org/jira/browse/SOLR-6665 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1, 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: Trunk, 5.2 Attachments: SOLR-6665.patch ZkController.publishAndWaitForDownStates uses a List<String> to keep track of all core names that have been published as down. It should use a set of coreNodeNames instead of core names for correctness. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Reopened] (SOLR-7429) Remove Solr server module sync-hack introduced in SOLR-4050.
[ https://issues.apache.org/jira/browse/SOLR-7429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar reopened SOLR-7429: - This has broken the branch_5x build. It cannot find classes from the servlet-api anymore. Remove Solr server module sync-hack introduced in SOLR-4050. Key: SOLR-7429 URL: https://issues.apache.org/jira/browse/SOLR-7429 Project: Solr Issue Type: Improvement Reporter: Mark Miller Assignee: Mark Miller Fix For: Trunk, 5.2 Attachments: SOLR-7429.patch This is annoying to the beast script I have and for other obvious reasons. We would really like to use sync=true here like everywhere. I'll see what I can do. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2158 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2158/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC
All tests passed
Build Log:
[...truncated 8741 lines...]
[javac] Compiling 787 source files to /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/build/solr-core/classes/java
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/response/SolrQueryResponse.java:26: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletResponse;
[javac] ^
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletRequest;
[javac] ^
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
[javac] private static RTimer getRequestTimer(HttpServletRequest req)
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
[javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
[javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: interface SolrRequestParser
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class StandardRequestParser
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class MultipartRequestParser
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class RawRequestParser
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /Users/jenkins/workspace/Lucene-Solr-5.x-MacOSX/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol
[javac] public boolean isFormData(HttpServletRequest req) {
[jira] [Resolved] (SOLR-6665) ZkController.publishAndWaitForDownStates should not use core name
[ https://issues.apache.org/jira/browse/SOLR-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-6665. - Resolution: Fixed This is fixed. I had to change the ZkControllerTest in branch_5x to make it compliant with Java7. ZkController.publishAndWaitForDownStates should not use core name - Key: SOLR-6665 URL: https://issues.apache.org/jira/browse/SOLR-6665 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1, 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: Trunk, 5.2 Attachments: SOLR-6665.patch ZkController.publishAndWaitForDownStates uses a List<String> to keep track of all core names that have been published as down. It should use a set of coreNodeNames instead of core names for correctness. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b54) - Build # 12209 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12209/
Java: 32bit/jdk1.9.0-ea-b54 -client -XX:+UseConcMarkSweepGC
All tests passed
Build Log:
[...truncated 8840 lines...]
[javac] Compiling 787 source files to /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/java
[javac] warning: [options] bootstrap class path not set in conjunction with -source 1.7
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:20: error: package javax.servlet.http does not exist
[javac] import javax.servlet.http.HttpServletRequest;
[javac] ^
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:444: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception;
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: interface SolrRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:543: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class MultipartRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:520: error: cannot find symbol
[javac] final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class RawRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:598: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams, InputStream in) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:639: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:656: error: cannot find symbol
[javac] public boolean isFormData(HttpServletRequest req) {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class FormDataRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:690: error: cannot find symbol
[javac] public SolrParams parseParamsAndFillStreams(final HttpServletRequest req, ArrayList<ContentStream> streams ) throws Exception {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class StandardRequestParser
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:142: error: cannot find symbol
[javac] private static RTimer getRequestTimer(HttpServletRequest req)
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:152: error: cannot find symbol
[javac] public SolrQueryRequest parse( SolrCore core, String path, HttpServletRequest req ) throws Exception
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[javac] /home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/java/org/apache/solr/servlet/SolrRequestParsers.java:762: error: cannot find symbol
[javac] private static SolrParams autodetect(HttpServletRequest req, ArrayList<ContentStream> streams, FastInputStream in) throws IOException {
[javac] ^
[javac] symbol: class HttpServletRequest
[javac] location: class SolrRequestParsers
[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON
[ https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503533#comment-14503533 ] Hrishikesh Gadre commented on SOLR-7176: I believe I would prefer 1) because it is the most generally usable solution to the problem. Compare-and-swap (even combined with the ZK multi-op feature) will not always be sufficient for operations that want to update several znodes atomically - and who knows, maybe some day we also want to do that kind of stuff from the command line. Taking a pessimistic lock (like the Overseer lock) will always be sufficient. The original use-case for this feature is the ability to update the cluster properties even when the Solr cluster is offline. Hence the fix for this use-case cannot really depend upon the Overseer lock. Also, as others mentioned in the JIRA above, we are trying to address a very specific problem (i.e. the ability to update the contents of the /clusterprops.json znode). Typically these updates should be very infrequent (e.g. why would a user flip between SSL/non-SSL mode frequently?). So I believe using optimistic locking should be fine. Thoughts? allow zkcli to modify JSON -- Key: SOLR-7176 URL: https://issues.apache.org/jira/browse/SOLR-7176 Project: Solr Issue Type: New Feature Reporter: Yonik Seeley Assignee: Noble Paul Priority: Minor Attachments: SOLR-7176.patch, SOLR-7176.patch, SOLR-7176.patch To enable SSL, we have instructions like the following: {code} server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put /clusterprops.json '{urlScheme:https}' {code} Overwriting the value won't work well when we have more properties to put in clusterprops. We should be able to change individual values or perhaps merge values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
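The optimistic-locking approach discussed above can be sketched with a self-contained retry loop. This is an illustration only (the names `ClusterPropsCas`, `props`, and the merged key are hypothetical, not zkcli code); against ZooKeeper the analogous step would be a conditional setData passing the znode version read earlier, retrying on a version-mismatch failure.

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch only (hypothetical names, not zkcli code): an
// optimistic read-modify-write loop. With ZooKeeper, the conditional
// write is setData() with the version obtained from the earlier read;
// a version-mismatch failure means a concurrent update happened, so
// the loop re-reads and retries instead of blindly overwriting.
public class ClusterPropsCas {
    public static void main(String[] args) {
        AtomicReference<String> props = new AtomicReference<>("{urlScheme:https}");
        while (true) {
            String current = props.get();                      // read + remember "version"
            String merged = current.replace("}", ",foo:bar}"); // merge a single property
            if (props.compareAndSet(current, merged)) {        // conditional write
                break;                                         // lost race -> loop again
            }
        }
        System.out.println(props.get());
    }
}
```

The loop only ever replaces the value it actually read, so a concurrent writer can never be silently clobbered - the same guarantee the versioned znode update would give zkcli.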
[jira] [Commented] (LUCENE-6420) Update forbiddenapis to 1.8
[ https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502769#comment-14502769 ] Steve Rowe commented on LUCENE-6420: bq. I think it's ready. Steve Rowe if you could have a look. +1, LGTM, {{ant clean-maven-build get-maven-poms cd maven-build mvn -DskipTests install}} succeeds. Thanks Uwe! Update forbiddenapis to 1.8 --- Key: LUCENE-6420 URL: https://issues.apache.org/jira/browse/LUCENE-6420 Project: Lucene - Core Issue Type: Improvement Components: general/build Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: Trunk, 5.2 Attachments: LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420.patch Update forbidden-apis plugin to 1.8:
- Initial support for Java 9 including JIGSAW
- Errors are now reported sorted by line numbers and correctly grouped (synthetic methods/lambdas)
- Package-level forbids: deny all classes from a package: org.hatedpkg.** (other globs work, too)
- In addition to file-level excludes, forbiddenapis now supports fine-granular excludes using Java annotations. You can use the one shipped, or define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. using a glob: **.SuppressForbidden would allow any annotation with that name in any package to suppress errors). Annotations need to be at class level; no runtime retention is required.
This will for now only update the dependency and remove the additional forbid by [~shalinmangar] for MessageFormat (which is now shipped with forbidden). But we should review and, for example, suppress forbidden failures in command line tools using @SuppressForbidden (or a similar annotation). The discussion is open; I can make a patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
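The project-local suppression annotation the issue proposes could look like the following minimal sketch. The annotation name matches the glob discussed above, but the `reason` element and the `CliTool` example class are assumptions for illustration; the key point is that class retention suffices because forbidden-apis reads bytecode, so no runtime retention is needed.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical sketch of a project-local suppression annotation that a
// forbidden-apis glob such as **.SuppressForbidden could match. CLASS
// retention is enough: the checker inspects bytecode, so the annotation
// never needs to be visible at runtime.
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.TYPE)
@interface SuppressForbidden {
    String reason(); // the 'reason' element is an assumption, for documentation only
}

// Example use on a command-line tool class, as suggested in the issue.
@SuppressForbidden(reason = "command-line tool legitimately prints to System.out")
public class CliTool {
    public static void main(String[] args) {
        System.out.println("ok");
    }
}
```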
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502770#comment-14502770 ] Shalin Shekhar Mangar commented on SOLR-7419: - I see. Can you please put a comment to that effect in the code and close this issue? Initial value of thread local in SolrQueryTimeoutImpl overflows a long -- Key: SOLR-7419 URL: https://issues.apache.org/jira/browse/SOLR-7419 Project: Solr Issue Type: Bug Components: search Affects Versions: 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Priority: Trivial Fix For: Trunk, 5.2 Same as the title.
{code}
/**
 * The ThreadLocal variable to store the time beyond which, the processing should exit.
 */
public static ThreadLocal<Long> timeoutAt = new ThreadLocal<Long>() {
  @Override
  protected Long initialValue() {
    return nanoTime() + Long.MAX_VALUE;
  }
};
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Closed] (SOLR-6583) Resuming connection with ZooKeeper causes log replay
[ https://issues.apache.org/jira/browse/SOLR-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar closed SOLR-6583. --- Resuming connection with ZooKeeper causes log replay Key: SOLR-6583 URL: https://issues.apache.org/jira/browse/SOLR-6583 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1 Reporter: Shalin Shekhar Mangar Assignee: Shalin Shekhar Mangar Priority: Minor Fix For: Trunk, 5.1 If a node is partitioned from ZooKeeper for an extended period of time then upon resuming connection, the node re-registers itself causing recoverFromLog() method to be executed which fails with the following exception: {code} 8091124 [Thread-71] ERROR org.apache.solr.update.UpdateLog – Error inspecting tlog tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009869 refcount=2} java.nio.channels.ClosedChannelException at sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99) at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678) at org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784) at org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89) at org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125) at java.io.InputStream.read(InputStream.java:101) at org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218) at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800) at org.apache.solr.cloud.ZkController.register(ZkController.java:834) at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271) at org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166) 8091125 [Thread-71] ERROR org.apache.solr.update.UpdateLog – Error inspecting tlog tlog{file=/home/ubuntu/shalin-lusolr/solr/example/solr/collection_5x3_shard5_replica3/data/tlog/tlog.0009870 refcount=2} java.nio.channels.ClosedChannelException at 
sun.nio.ch.FileChannelImpl.ensureOpen(FileChannelImpl.java:99) at sun.nio.ch.FileChannelImpl.read(FileChannelImpl.java:678) at org.apache.solr.update.ChannelFastInputStream.readWrappedStream(TransactionLog.java:784) at org.apache.solr.common.util.FastInputStream.refill(FastInputStream.java:89) at org.apache.solr.common.util.FastInputStream.read(FastInputStream.java:125) at java.io.InputStream.read(InputStream.java:101) at org.apache.solr.update.TransactionLog.endsWithCommit(TransactionLog.java:218) at org.apache.solr.update.UpdateLog.recoverFromLog(UpdateLog.java:800) at org.apache.solr.cloud.ZkController.register(ZkController.java:834) at org.apache.solr.cloud.ZkController$1.command(ZkController.java:271) at org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166) {code} This is because the recoverFromLog uses transaction log references that were collected at startup and are no longer valid. We shouldn't even be running recoverFromLog code for ZK re-connect. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
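The failure mode above - a transaction-log channel reference collected at startup being read after the underlying channel was closed - is easy to reproduce in isolation. A minimal sketch (this is not Solr's tlog code; the temp file and class name are invented for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedChannelException;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Minimal reproduction of the failure mode (not Solr's tlog code): a
// stale FileChannel reference, closed elsewhere, throws
// ClosedChannelException on the next read -- analogous to the
// collected-at-startup tlog references used during the ZK re-connect.
public class StaleChannelDemo {
    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("tlog", ".bin");
        Files.write(p, new byte[] {1, 2, 3, 4});
        FileChannel ch = FileChannel.open(p, StandardOpenOption.READ);
        ch.close(); // simulates the log being closed while a reference survives
        try {
            ch.read(ByteBuffer.allocate(4)); // stale reference: blows up here
            System.out.println("read succeeded");
        } catch (ClosedChannelException e) {
            System.out.println("ClosedChannelException");
        } finally {
            Files.delete(p);
        }
    }
}
```

The fix direction described in the issue follows from this: don't retain channel references across the re-connect at all, rather than trying to guard each read.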
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2979 - Still Failing
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2979/ 1 tests failed. REGRESSION: org.apache.solr.cloud.HttpPartitionTest.test Error Message: Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! ClusterState: { c8n_1x2:{ autoAddReplicas:false, router:{name:compositeId}, shards:{shard1:{ range:8000-7fff, state:active, replicas:{ core_node1:{ core:c8n_1x2_shard1_replica1, node_name:127.0.0.1:64899_ucqpx, state:recovering, base_url:http://127.0.0.1:64899/ucqpx}, core_node2:{ core:c8n_1x2_shard1_replica2, node_name:127.0.0.1:64874_ucqpx, state:active, base_url:http://127.0.0.1:64874/ucqpx;, leader:true, maxShardsPerNode:1, replicationFactor:2}, collection1:{ autoAddReplicas:false, autoCreated:true, router:{name:compositeId}, shards:{ shard1:{ range:8000-, state:active, replicas:{core_node2:{ core:collection1, node_name:127.0.0.1:64899_ucqpx, state:active, base_url:http://127.0.0.1:64899/ucqpx;, leader:true}}}, shard2:{ range:0-7fff, state:active, replicas:{ core_node1:{ core:collection1, node_name:127.0.0.1:64883_ucqpx, state:active, base_url:http://127.0.0.1:64883/ucqpx;, leader:true}, core_node3:{ core:collection1, node_name:127.0.0.1:64908_ucqpx, state:active, base_url:http://127.0.0.1:64908/ucqpx, maxShardsPerNode:1, replicationFactor:1}, control_collection:{ autoAddReplicas:false, autoCreated:true, router:{name:compositeId}, shards:{shard1:{ range:8000-7fff, state:active, replicas:{core_node1:{ core:collection1, node_name:127.0.0.1:64874_ucqpx, state:active, base_url:http://127.0.0.1:64874/ucqpx;, leader:true, maxShardsPerNode:1, replicationFactor:1}} Stack Trace: java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: { c8n_1x2:{ autoAddReplicas:false, router:{name:compositeId}, shards:{shard1:{ range:8000-7fff, state:active, replicas:{ core_node1:{ core:c8n_1x2_shard1_replica1, node_name:127.0.0.1:64899_ucqpx, state:recovering, base_url:http://127.0.0.1:64899/ucqpx}, core_node2:{ core:c8n_1x2_shard1_replica2, node_name:127.0.0.1:64874_ucqpx, state:active, base_url:http://127.0.0.1:64874/ucqpx;, leader:true, maxShardsPerNode:1, replicationFactor:2}, collection1:{ autoAddReplicas:false, autoCreated:true, router:{name:compositeId}, shards:{ shard1:{ range:8000-, state:active, replicas:{core_node2:{ core:collection1, node_name:127.0.0.1:64899_ucqpx, state:active, base_url:http://127.0.0.1:64899/ucqpx;, leader:true}}}, shard2:{ range:0-7fff, state:active, replicas:{ core_node1:{ core:collection1, node_name:127.0.0.1:64883_ucqpx, state:active, base_url:http://127.0.0.1:64883/ucqpx;, leader:true}, core_node3:{ core:collection1, node_name:127.0.0.1:64908_ucqpx, state:active, base_url:http://127.0.0.1:64908/ucqpx, maxShardsPerNode:1, replicationFactor:1}, control_collection:{ autoAddReplicas:false, autoCreated:true, router:{name:compositeId}, shards:{shard1:{ range:8000-7fff, state:active, replicas:{core_node1:{ core:collection1, node_name:127.0.0.1:64874_ucqpx, state:active, base_url:http://127.0.0.1:64874/ucqpx;, leader:true, maxShardsPerNode:1, replicationFactor:1}} at __randomizedtesting.SeedInfo.seed([4146C2AFF081C3D6:C912FD755E7DAE2E]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.AbstractFullDistribZkTestBase.ensureAllReplicasAreActive(AbstractFullDistribZkTestBase.java:1920) at org.apache.solr.cloud.HttpPartitionTest.testRf2(HttpPartitionTest.java:237) at org.apache.solr.cloud.HttpPartitionTest.test(HttpPartitionTest.java:105) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502774#comment-14502774 ] Steve Rowe commented on SOLR-7419: -- bq. Can you please put a comment to that effect in the code and close this issue? Will do. Thanks for bringing it up.
[jira] [Commented] (SOLR-7430) Encrypted pptx/xlsx causes a ClassNotFoundException
[ https://issues.apache.org/jira/browse/SOLR-7430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502682#comment-14502682 ] Jan Høydahl commented on SOLR-7430: --- How did you create your core? With what config? Encrypted pptx/xlsx causes a ClassNotFoundException --- Key: SOLR-7430 URL: https://issues.apache.org/jira/browse/SOLR-7430 Project: Solr Issue Type: Bug Components: contrib - Solr Cell (Tika extraction) Affects Versions: 5.1 Environment: Windows 7 (64bit) jre 1.8.0_40-b26 (64 bit) Reporter: Jon Scharff When indexing an encrypted pptx or xlsx file via the command solr-home> java -Dc=core -Dauto=yes -Ddata=files -jar example\exampledocs\post.jar file.pptx on a server started with solr-home> bin\solr start, a ClassNotFoundException results instead of an EncryptedDocumentException. It appears that POI is using reflection to get the proper encryption handler, but the necessary jar files are not supplied by Jetty's ClassLoader. A portion of the resulting error trace is below. org.apache.solr.common.SolrException: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.OfficeParser@2e973e0f at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:227) ... Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.OfficeParser@2e973e0f at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:262) at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256) at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120) at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:221) ... 31 more Caused by: java.io.IOException: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:69) at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:228) at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:172) at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256) ... 34 more Caused by: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:430) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:383) at org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:150) at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:67) ... 37 more -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
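Because POI resolves the encryption builder class by name via reflection, a jar missing from the webapp classloader surfaces as ClassNotFoundException rather than the expected EncryptedDocumentException. The pattern in isolation (a sketch only; the class name is taken from the stack trace above and is expected to be absent from this example's classpath):

```java
// Sketch of the failure pattern only: resolving a handler class by name
// via reflection turns a missing jar into ClassNotFoundException at the
// call site, far from the real cause. The class name is the one from the
// stack trace; it is assumed NOT to be on this example's classpath.
public class ReflectiveLoadDemo {
    public static void main(String[] args) {
        try {
            Class.forName("org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder");
            System.out.println("loaded");
        } catch (ClassNotFoundException e) {
            System.out.println("ClassNotFoundException");
        }
    }
}
```

This is why the report points at packaging (the jars Jetty's WebAppClassLoader can see) rather than at a bug in the parsing code itself.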
[jira] [Resolved] (SOLR-6583) Resuming connection with ZooKeeper causes log replay
[ https://issues.apache.org/jira/browse/SOLR-6583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar resolved SOLR-6583. - Resolution: Fixed Fix Version/s: (was: 5.0) 5.1 This was fixed by SOLR-7338
[jira] [Updated] (SOLR-6220) Replica placement strategy for solrcloud
[ https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6220: - Attachment: SOLR-6220.patch More tests. Replica placement strategy for solrcloud Key: SOLR-6220 URL: https://issues.apache.org/jira/browse/SOLR-6220 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch
h1.Objective
Most cloud based systems allow specifying rules on how the replicas/nodes of a cluster are allocated. Solr should have a flexible mechanism through which we should be able to control allocation of replicas, or later change it to suit the needs of the system. All configurations are on a per-collection basis. The rules are applied whenever a replica is created in any of the shards in a given collection during
* collection creation
* shard splitting
* add replica
* createshard
There are two aspects to how replicas are placed: snitch and placement.
h2.snitch
How to identify the tags of nodes. Snitches are configured through the collection create command with the snitch prefix, e.g. snitch.type=EC2Snitch. The system provides the following implicit tag names, which cannot be used by other snitches:
* node : the Solr nodename
* host : the hostname
* ip : the IP address of the host
* cores : a dynamic variable which gives the core count at any given point
* disk : a dynamic variable which gives the available disk space at any given point
There will be a few snitches provided by the system, such as:
h3.EC2Snitch
Provides two tags called dc, rack from the region and zone values in EC2
h3.IPSnitch
Uses the IP to infer the “dc” and “rack” values
h3.NodePropertySnitch
This lets users provide system properties to each node with tag name and value. Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this particular node will have two tags “tag-x” and “tag-y”.
h3.RestSnitch
Lets the user configure a URL which the server can invoke to get all the tags for a given node. This takes extra parameters in the create command, example: {{snitch={type=RestSnitch,url=http://snitchserverhost:port/[node]}}}. The response of the rest call {{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}} must be in JSON format, e.g.:
{code:JavaScript}
{
  “tag-x”:”x-val”,
  “tag-y”:”y-val”
}
{code}
h3.ManagedSnitch
This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The user should be able to manage the tags and values of each node through a collection API.
h2.Rules
This tells how many replicas for a given shard need to be assigned to nodes with the given key/value pairs. These parameters will be passed to the collection CREATE API as a multivalued parameter rule. The values will be saved in the state of the collection as follows:
{code:JavaScript}
{
  “mycollection”:{
    “snitch”: { type:“EC2Snitch” },
    “rules”:[
      {“shard”: “value1”, “replica”: “value2”, tag1:val1},
      {“shard”: “value1”, “replica”: “value2”, tag2:val2}
    ]
  }
}
{code}
A rule is specified in a pseudo-JSON syntax, which is a map of keys and values.
* Each collection can have any number of rules. As long as the rules do not conflict with each other it is OK; otherwise an error is thrown.
* In each rule, shard and replica can be omitted
** the default value of replica is {{\*}}, meaning ANY, or you can specify a count and an operand such as {{+}} or {{-}}
** the value of shard can be a shard name, or {{\*}} meaning EACH, or {{**}} meaning ANY. The default value is {{\*\*}} (ANY)
* There should be exactly one extra condition in a rule other than {{shard}} and {{replica}}.
* All keys other than {{shard}} and {{replica}} are called tags, and the tags are nothing but values provided by the snitch for each node
* By default certain tags such as {{node}}, {{host}}, {{port}} are provided by the system implicitly
Examples:
{noformat}
//in each rack there can be max two replicas of a given shard
{rack:*,shard:*,replica:2-}
//in each rack there can be max two replicas of ANY shard
{rack:*,shard:**,replica:2-}
{rack:*,replica:2-}
//in each node there should be max one replica of EACH shard
{node:*,shard:*,replica:1-}
//in each node there should be max one replica of ANY shard
{node:*,shard:**,replica:1-}
{node:*,replica:1-}
//In rack 738 and shard=shard1, there can be max 0 replicas
{rack:738,shard:shard1,replica:0-}
//All replicas of shard1 should go to rack 730
{shard:shard1,replica:*,rack:730}
{shard:shard1,rack:730}
// all replicas must be created in a node with at least 20GB disk
{replica:*,shard:*,disk:20+}
{noformat}
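The count-and-operand syntax used in the examples above ({{2-}} for "at most two", {{20+}} for "at least twenty", a bare number for "exactly") can be illustrated with a toy predicate. This is purely illustrative and not Solr's rule engine; the class and method names are invented:

```java
// Toy illustration (not Solr's rule engine) of the count-and-operand
// replica syntax from the examples above: a trailing '-' means "at
// most", a trailing '+' means "at least", and a bare number "exactly".
public class ReplicaRuleDemo {
    static boolean satisfies(String spec, int actual) {
        char op = spec.charAt(spec.length() - 1);
        if (op == '-') {
            return actual <= Integer.parseInt(spec.substring(0, spec.length() - 1));
        }
        if (op == '+') {
            return actual >= Integer.parseInt(spec.substring(0, spec.length() - 1));
        }
        return actual == Integer.parseInt(spec);
    }

    public static void main(String[] args) {
        System.out.println(satisfies("2-", 2));   // "replica:2-": two replicas allowed
        System.out.println(satisfies("2-", 3));   // three violates "at most two"
        System.out.println(satisfies("20+", 25)); // "disk:20+" style: at least 20
    }
}
```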
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502744#comment-14502744 ] Yonik Seeley commented on SOLR-7419: Heh, yeah, this doesn't result in bad behavior since the original overflow is paired with an underflow. But if something like that is intentional it should certainly be commented. In this case, we should just set the initial value to Long.MAX_VALUE.
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502760#comment-14502760 ] Steve Rowe commented on SOLR-7419: -- bq. Yes, on overflow the JVM circles back to MIN_VALUE so even though it works fine, I don't see a reason for the initialValue to be assigned this way. The idea is to put {{timeoutAt}} as far in the future as possible, so that it effectively never happens. bq. In this case, we should just set the initial value to Long.MAX_VALUE I don't think that will work, since nanoTime() values can be anything from Long.MIN_VALUE to Long.MAX_VALUE - that's my interpretation anyway of this from [the {{nanoTime()}} javadocs|https://docs.oracle.com/javase/8/docs/api/java/lang/System.html#nanoTime--]: {quote} This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time. The value returned represents nanoseconds since some fixed but arbitrary origin time (perhaps in the future, so values may be negative). {quote}
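The paired overflow/underflow behavior discussed in this thread can be checked directly: `nanoTime() + Long.MAX_VALUE` typically wraps to a negative value, yet a timeout check stays correct as long as it compares differences rather than absolute values. A minimal sketch (not the SolrQueryTimeoutImpl code; the class name is invented):

```java
// Minimal sketch (not SolrQueryTimeoutImpl): adding Long.MAX_VALUE to a
// positive nanoTime() silently wraps around to a negative long, but the
// timeout check stays correct if it compares the *difference* against
// zero, because the later subtraction underflows back symmetrically.
public class NanoOverflowDemo {
    public static void main(String[] args) {
        long now = System.nanoTime();
        long timeoutAt = now + Long.MAX_VALUE; // may overflow (wrap) past Long.MAX_VALUE

        // Difference-based check: timeoutAt - nanoTime() is roughly
        // Long.MAX_VALUE minus the elapsed nanos, i.e. positive, so the
        // deadline "effectively never happens" as intended.
        boolean timedOut = timeoutAt - System.nanoTime() < 0;
        System.out.println(timedOut);
    }
}
```

An absolute comparison like `System.nanoTime() > timeoutAt` would misfire whenever `timeoutAt` has wrapped negative, which is why the difference form matters here.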
Re: Change line length setting in eclipse to 120 chars
The problem is that our scripts configure Eclipse to break lines on 80 characters, which is annoying when I format lines, because I then need to rewrap them the way I want. I'll open an issue with a change; people can object/support there. Thanks Toke! Shai On Mon, Apr 20, 2015 at 3:13 PM, Toke Eskildsen t...@statsbiblioteket.dk wrote: On Sat, 2015-04-18 at 10:07 +0300, Shai Erera wrote: Our dev-tools/eclipse configure the project to break lines on 80 characters. Are there objections to change it to 120? Line length was discussed back in 2013 (search for "Line length in Lucene/Solr code") and AFAIR the conclusion was not to have a hard max, but to aim for <= 120 characters/line. (it might be worth stating this on https://wiki.apache.org/solr/HowToContribute) Shai - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (LUCENE-6441) Change default formatting settings to break lines at 120 characters
Shai Erera created LUCENE-6441: -- Summary: Change default formatting settings to break lines at 120 characters Key: LUCENE-6441 URL: https://issues.apache.org/jira/browse/LUCENE-6441 Project: Lucene - Core Issue Type: Improvement Components: -tools Reporter: Shai Erera Assignee: Shai Erera Priority: Minor Our eclipse settings default to break lines at 80 characters. This issue changes them to break lines at 120 characters. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Change line length setting in eclipse to 120 chars
On Sat, 2015-04-18 at 10:07 +0300, Shai Erera wrote: Our dev-tools/eclipse configure the project to break lines on 80 characters. Are there objections to change it to 120? Line length was discussed back in 2013 (search for "Line length in Lucene/Solr code") and AFAIR the conclusion was not to have a hard max, but to aim for <= 120 characters/line. (it might be worth stating this on https://wiki.apache.org/solr/HowToContribute) Shai - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
Re: Change line length setting in eclipse to 120 chars
+1 Please fix the Eclipse config to not force any breaks. Uwe On 20 April 2015 14:13:15 MESZ, Toke Eskildsen t...@statsbiblioteket.dk wrote: On Sat, 2015-04-18 at 10:07 +0300, Shai Erera wrote: Our dev-tools/eclipse configure the project to break lines on 80 characters. Are there objections to change it to 120? Line length was discussed back in 2013 (search for "Line length in Lucene/Solr code") and AFAIR the conclusion was not to have a hard max, but to aim for <= 120 characters/line. (it might be worth stating this on https://wiki.apache.org/solr/HowToContribute) Shai - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Uwe Schindler H.-H.-Meier-Allee 63, 28213 Bremen http://www.thetaphi.de
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502742#comment-14502742 ] Shalin Shekhar Mangar commented on SOLR-7419: - Thanks Steve. Yes, on overflow the JVM circles back to MIN_VALUE, so even though it works fine, I don't see a reason for the initialValue to be assigned this way.
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502764#comment-14502764 ] Yonik Seeley commented on SOLR-7419: Right you are, I hadn't realized that "This method can only be used to measure elapsed time and is not related to any other notion of system or wall-clock time." Initial value of thread local in SolrQueryTimeoutImpl overflows a long -- Key: SOLR-7419 URL: https://issues.apache.org/jira/browse/SOLR-7419 Project: Solr Issue Type: Bug Components: search Affects Versions: 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Priority: Trivial Fix For: Trunk, 5.2 Same as the title. {code} /** * The ThreadLocal variable to store the time beyond which, the processing should exit. */ public static ThreadLocal<Long> timeoutAt = new ThreadLocal<Long>() { @Override protected Long initialValue() { return nanoTime() + Long.MAX_VALUE; } }; {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
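[Editor's note: for context, a minimal standalone sketch (not Solr code) of why the quoted initialValue is problematic: System.nanoTime() plus Long.MAX_VALUE wraps around on signed 64-bit overflow, so the intended "never time out" sentinel starts out negative whenever nanoTime() is positive.]

```java
public class OverflowDemo {
    public static void main(String[] args) {
        // Stand-in value for System.nanoTime(); any positive value triggers the wrap.
        long nanos = 42L;
        long timeoutAt = nanos + Long.MAX_VALUE; // silently overflows
        System.out.println(timeoutAt == Long.MIN_VALUE + 41); // true: wrapped around
        System.out.println(timeoutAt < 0);                    // true: sentinel is negative
        // Using Long.MAX_VALUE directly as the "no timeout" sentinel avoids
        // the addition, and therefore the overflow, entirely.
        long noTimeout = Long.MAX_VALUE;
        System.out.println(System.nanoTime() < noTimeout);    // true on typical JVMs
    }
}
```

As the comments note, it "works fine" only because comparisons against the wrapped value still happen to behave; the cleaner initial value is simply `Long.MAX_VALUE`.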
Re: Change line length setting in eclipse to 120 chars
+1. I have to constantly unwrap lines after formatting. Better to make it the other way. - Mark On Mon, Apr 20, 2015 at 8:58 AM Uwe Schindler u...@thetaphi.de wrote: +1 Please fix the Eclipse config to not force any breaks. Uwe On 20 April 2015 at 14:13:15 MESZ, Toke Eskildsen t...@statsbiblioteket.dk wrote: On Sat, 2015-04-18 at 10:07 +0300, Shai Erera wrote: Our dev-tools/eclipse configures the project to break lines on 80 characters. Are there objections to changing it to 120? Line length was discussed back in 2013 (search for Line length in Lucene/Solr code) and AFAIR the conclusion was not to have a hard max, but to aim for <= 120 characters/line. (it might be worth stating this on https://wiki.apache.org/solr/HowToContribute) Shai To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Uwe Schindler H.-H.-Meier-Allee 63, 28213 Bremen http://www.thetaphi.de
[jira] [Updated] (LUCENE-6420) Update forbiddenapis to 1.8
[ https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6420: -- Attachment: LUCENE-6420-anno.patch New patch with [~steve_rowe]'s suggestions. Explanation follows! Update forbiddenapis to 1.8 --- Key: LUCENE-6420 URL: https://issues.apache.org/jira/browse/LUCENE-6420 Project: Lucene - Core Issue Type: Improvement Components: general/build Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: Trunk, 5.2 Attachments: LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420.patch Update forbidden-apis plugin to 1.8: - Initial support for Java 9 including JIGSAW - Errors are now reported sorted by line numbers and correctly grouped (synthetic methods/lambdas) - Package-level forbids: Deny all classes from a package: org.hatedpkg.** (also other globs work) - In addition to file-level excludes, forbiddenapis now supports fine-granular excludes using Java annotations. You can use the one shipped, or define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. using a glob: **.SuppressForbidden would allow any annotation with that name in any package to suppress errors). Annotations only need to be on class-file level; no runtime annotation is required. This will for now only update the dependency and remove the additional forbid by [~shalinmangar] for MessageFormat (which is now shipped with forbidden). But we should review and, for example, suppress forbidden failures in command line tools using @SuppressForbidden (or a similar annotation). The discussion is open, I can make a patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
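[Editor's note: as a sketch of what such a suppression annotation could look like, and why no runtime retention is needed: forbidden-apis inspects class files, so class-file retention is sufficient. This is a hypothetical example, not the annotation the issue eventually committed; the `reason()` element is an invented extra.]

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical suppression annotation. forbidden-apis reads compiled
// bytecode, so RetentionPolicy.CLASS is enough - the annotation never
// has to be visible via reflection at runtime.
@Retention(RetentionPolicy.CLASS)
@Target({ElementType.TYPE, ElementType.CONSTRUCTOR, ElementType.METHOD, ElementType.FIELD})
public @interface SuppressForbidden {
    // Hypothetical element: require a justification for every suppression.
    String reason();
}
```

A tool configured with the glob `**.SuppressForbidden` would then skip violations in any element carrying an annotation with that simple name, regardless of its package.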
[jira] [Commented] (LUCENE-6440) Show LuceneTestCase LiveIndexWriterConfig changes with deltas
[ https://issues.apache.org/jira/browse/LUCENE-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502521#comment-14502521 ] Michael McCandless commented on LUCENE-6440: +1 Show LuceneTestCase LiveIndexWriterConfig changes with deltas - Key: LUCENE-6440 URL: https://issues.apache.org/jira/browse/LUCENE-6440 Project: Lucene - Core Issue Type: Test Reporter: Robert Muir Attachments: LUCENE-6440.patch With tests.verbose, each time the IWC is changed the whole thing is printed out. But this is overly verbose during indexing and does not show you what changed, so you have to stare hard at tons of IWC.toString()s and figure it out. Instead I think we should just show a diff? {noformat} [junit4] 1 NOTE: LuceneTestCase: randomly changed IWC's live settings: [junit4] 1 - ramBufferSizeMB=16.0 [junit4] 1 + ramBufferSizeMB=3.0 [junit4] 1 - maxBufferedDocs=308 [junit4] 1 + maxBufferedDocs=-1 {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
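[Editor's note: the delta output proposed in LUCENE-6440 above amounts to diffing two key/value views of the live config and printing only changed entries in `- old` / `+ new` style. A minimal sketch of that idea (hypothetical helper, not the attached patch):]

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConfigDiff {
    // Build "- old" / "+ new" lines for every setting whose value changed,
    // matching the output style shown in the issue description.
    static List<String> diff(Map<String, Object> before, Map<String, Object> after) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Object> e : before.entrySet()) {
            Object newVal = after.get(e.getKey());
            if (!e.getValue().equals(newVal)) {
                out.add("- " + e.getKey() + "=" + e.getValue());
                out.add("+ " + e.getKey() + "=" + newVal);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> before = new LinkedHashMap<>();
        before.put("ramBufferSizeMB", 16.0);
        before.put("maxBufferedDocs", 308);
        Map<String, Object> after = new LinkedHashMap<>(before);
        after.put("ramBufferSizeMB", 3.0);
        after.put("maxBufferedDocs", -1);
        // Unchanged settings produce no output, so the verbose log only
        // shows what the test randomly changed.
        diff(before, after).forEach(System.out::println);
    }
}
```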
[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2978 - Failure
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2978/ 1 tests failed. REGRESSION: org.apache.solr.cloud.RecoveryZkTest.test Error Message: shard1 is not consistent. Got 1012 from http://127.0.0.1:19844/bt_c/f/collection1lastClient and got 259 from http://127.0.0.1:19847/bt_c/f/collection1 Stack Trace: java.lang.AssertionError: shard1 is not consistent. Got 1012 from http://127.0.0.1:19844/bt_c/f/collection1lastClient and got 259 from http://127.0.0.1:19847/bt_c/f/collection1 at __randomizedtesting.SeedInfo.seed([DD1E97CF1811BA97:554AA815B6EDD76F]:0) at org.junit.Assert.fail(Assert.java:93) at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:123) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (LUCENE-6440) Show LuceneTestCase LiveIndexWriterConfig changes with deltas
[ https://issues.apache.org/jira/browse/LUCENE-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502478#comment-14502478 ] Adrien Grand commented on LUCENE-6440: -- +1 Show LuceneTestCase LiveIndexWriterConfig changes with deltas - Key: LUCENE-6440 URL: https://issues.apache.org/jira/browse/LUCENE-6440 Project: Lucene - Core Issue Type: Test Reporter: Robert Muir Attachments: LUCENE-6440.patch With tests.verbose, each time the IWC is changed the whole thing is printed out. But this is overly verbose during indexing and does not show you what changed, so you have to stare hard at tons of IWC.toString()s and figure it out. Instead I think we should just show a diff? {noformat} [junit4] 1 NOTE: LuceneTestCase: randomly changed IWC's live settings: [junit4] 1 - ramBufferSizeMB=16.0 [junit4] 1 + ramBufferSizeMB=3.0 [junit4] 1 - maxBufferedDocs=308 [junit4] 1 + maxBufferedDocs=-1 {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-7430) Encrypted pptx/xlsx causes a ClassNotFoundException
Jon Scharff created SOLR-7430: - Summary: Encrypted pptx/xlsx causes a ClassNotFoundException Key: SOLR-7430 URL: https://issues.apache.org/jira/browse/SOLR-7430 Project: Solr Issue Type: Bug Components: contrib - Solr Cell (Tika extraction) Affects Versions: 5.1 Environment: Windows 7 (64bit) jre 1.8.0_40-b26 (64 bit) Reporter: Jon Scharff When indexing an encrypted pptx or xlsx file via the command <solr-home>java -Dc=core -Dauto=yes -Ddata=files -jar example\exampledocs\post.jar file.pptx on a server started with <solr-home>bin\solr start, a ClassNotFoundException results instead of an EncryptedDocumentException. It appears that POI is using reflection to get the proper encryption handler, but the necessary jar files are not supplied by jetty's ClassLoader. A portion of the resulting error trace is below. org.apache.solr.common.SolrException: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.OfficeParser@2e973e0f at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:227) ... Caused by: org.apache.tika.exception.TikaException: TIKA-198: Illegal IOException from org.apache.tika.parser.microsoft.OfficeParser@2e973e0f at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:262) at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256) at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:120) at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:221) ...
31 more Caused by: java.io.IOException: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:69) at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:228) at org.apache.tika.parser.microsoft.OfficeParser.parse(OfficeParser.java:172) at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:256) ... 34 more Caused by: java.lang.ClassNotFoundException: org.apache.poi.poifs.crypt.agile.AgileEncryptionInfoBuilder at java.net.URLClassLoader.findClass(URLClassLoader.java:381) at java.lang.ClassLoader.loadClass(ClassLoader.java:424) at java.lang.ClassLoader.loadClass(ClassLoader.java:357) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:430) at org.eclipse.jetty.webapp.WebAppClassLoader.loadClass(WebAppClassLoader.java:383) at org.apache.poi.poifs.crypt.EncryptionInfo.getBuilder(EncryptionInfo.java:150) at org.apache.poi.poifs.crypt.EncryptionInfo.<init>(EncryptionInfo.java:67) ... 37 more -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (LUCENE-6420) Update forbiddenapis to 1.8
[ https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502454#comment-14502454 ] Uwe Schindler commented on LUCENE-6420: --- bq. lucene/test-framework/pom.xml.template and solr/core/src/test/pom.xml.template aren't modified, but likely should be - I think the specializations there can be removed. They cannot be completely removed: - test-framework executes test-checks on the standard src/main folder (this is also special in the Ant build). But the solution is much easier here: It just inherits the shared execution, but changes the goal to check instead of testCheck. The config is inherited, so it executes the test configuration on the src/main folder (like in Ant) - solr/core/src/test/pom.xml is special, because it still excludes the imported commons-csv tests. I simplified this by also inheriting the config, just adding the exclude. This is similar to what Ant does. bq. lucene/benchmark/pom.xml.template and lucene/demo/pom.xml.template should probably have lucene.txt added to their signaturesFiles. - I solved this in a similar way by inheriting the parent configuration and just overriding the bundledSignatures config (without jdk-system-out). bq. Also, if I understand how things are set up, the new annotation suppresses all forms of forbiddenapi checking, as compared to the previous configuration, where there were multiple executions, and exceptions were targeted at a particular check (e.g. sysout), but didn't prevent other checks from running. In the maven build this represents a loss of coverage everywhere the annotations are used, doesn't it? Not sure about the Ant build. Yes and No :-) You are right, we miss some coverage (also in the Ant build), but we get more coverage on the other side, because we can exclude in a more fine-granular way (on method level).
I thought about this already; one solution might be (but let's keep this for later): For the very common sysout stuff, we can add a separate {{@SuppressForbiddenSysout}} so we can scan in 2 executions. We should discuss this in a separate issue. I did not want to add too many annotations yet. On the other hand, because we can now work in a more fine-granular way, I would suggest refactoring the code a bit and moving the violations to separate methods (like I did in the DocSetPerf tester) and only excluding those, so we don't have to exclude the whole method. For the command line tools, we might add a private method called printout(String), to the class containing the main method, that is suppressed. Update forbiddenapis to 1.8 --- Key: LUCENE-6420 URL: https://issues.apache.org/jira/browse/LUCENE-6420 Project: Lucene - Core Issue Type: Improvement Components: general/build Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: Trunk, 5.2 Attachments: LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420.patch Update forbidden-apis plugin to 1.8: - Initial support for Java 9 including JIGSAW - Errors are now reported sorted by line numbers and correctly grouped (synthetic methods/lambdas) - Package-level forbids: Deny all classes from a package: org.hatedpkg.** (also other globs work) - In addition to file-level excludes, forbiddenapis now supports fine-granular excludes using Java annotations. You can use the one shipped, or define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. using a glob: **.SuppressForbidden would allow any annotation with that name in any package to suppress errors). Annotations only need to be on class-file level; no runtime annotation is required. This will for now only update the dependency and remove the additional forbid by [~shalinmangar] for MessageFormat (which is now shipped with forbidden). But we should review and, for example, suppress forbidden failures in command line tools using @SuppressForbidden (or a similar annotation).
The discussion is open, I can make a patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2154 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2154/ Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC 1 tests failed. FAILED: org.apache.solr.schema.TestCloudManagedSchemaConcurrent.test Error Message: QUERY FAILED: xpath=/response/lst[@name='responseHeader']/int[@name='status'][.='0'] request=/schema/dynamicfields/newdynamicfieldPut0_*?wt=xml&updateTimeoutSecs=15 response=<?xml version="1.0" encoding="UTF-8"?> <response> <lst name="responseHeader"> <int name="status">500</int> <int name="QTime">29608</int> </lst> <lst name="error"> <str name="msg">7 out of 8 replicas failed to update their schema to version 4 within 15 seconds! Failed cores: [https://127.0.0.1:50913/wb_/collection1/, https://127.0.0.1:50894/wb_/collection1/, https://127.0.0.1:50920/wb_/collection1/, https://127.0.0.1:50897/wb_/collection1/, https://127.0.0.1:50930/wb_/collection1/, https://127.0.0.1:50901/wb_/collection1/, https://127.0.0.1:50939/wb_/collection1/]</str> <str name="trace">org.apache.solr.common.SolrException: 7 out of 8 replicas failed to update their schema to version 4 within 15 seconds!
Failed cores: [https://127.0.0.1:50913/wb_/collection1/, https://127.0.0.1:50894/wb_/collection1/, https://127.0.0.1:50920/wb_/collection1/, https://127.0.0.1:50897/wb_/collection1/, https://127.0.0.1:50930/wb_/collection1/, https://127.0.0.1:50901/wb_/collection1/, https://127.0.0.1:50939/wb_/collection1/] at org.apache.solr.schema.ManagedIndexSchema.waitForSchemaZkVersionAgreement(ManagedIndexSchema.java:260) at org.apache.solr.rest.schema.BaseFieldResource.waitForSchemaUpdateToPropagate(BaseFieldResource.java:120) at org.apache.solr.rest.schema.DynamicFieldResource.put(DynamicFieldResource.java:185) at org.restlet.resource.ServerResource.doHandle(ServerResource.java:432) at org.restlet.resource.ServerResource.doConditionalHandle(ServerResource.java:350) at org.restlet.resource.ServerResource.handle(ServerResource.java:952) at org.restlet.resource.Finder.handle(Finder.java:246) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.routing.Filter.handle(Filter.java:206) at org.restlet.routing.Router.doHandle(Router.java:431) at org.restlet.routing.Router.handle(Router.java:648) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.routing.Filter.handle(Filter.java:206) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.routing.Filter.handle(Filter.java:206) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.engine.application.StatusFilter.doHandle(StatusFilter.java:155) at org.restlet.routing.Filter.handle(Filter.java:206) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.routing.Filter.handle(Filter.java:206) at org.restlet.engine.CompositeHelper.handle(CompositeHelper.java:211) at org.restlet.engine.application.ApplicationHelper.handle(ApplicationHelper.java:84) at org.restlet.Application.handle(Application.java:381) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.routing.Filter.handle(Filter.java:206) at 
org.restlet.routing.Router.doHandle(Router.java:431) at org.restlet.routing.Router.handle(Router.java:648) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.routing.Filter.handle(Filter.java:206) at org.restlet.routing.Router.doHandle(Router.java:431) at org.restlet.routing.Router.handle(Router.java:648) at org.restlet.routing.Filter.doHandle(Filter.java:159) at org.restlet.routing.Filter.handle(Filter.java:206) at org.restlet.engine.CompositeHelper.handle(CompositeHelper.java:211) at org.restlet.Component.handle(Component.java:392) at org.restlet.Server.handle(Server.java:516) at org.restlet.engine.ServerHelper.handle(ServerHelper.java:72) at org.restlet.engine.adapter.HttpServerHelper.handle(HttpServerHelper.java:152) at org.restlet.ext.servlet.ServerServlet.service(ServerServlet.java:1089) at javax.servlet.http.HttpServlet.service(HttpServlet.java:848) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:669) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:457) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137) at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135) at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:276) at
[jira] [Commented] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502524#comment-14502524 ] Michael McCandless commented on LUCENE-6439: +1 Create test-framework/src/test -- Key: LUCENE-6439 URL: https://issues.apache.org/jira/browse/LUCENE-6439 Project: Lucene - Core Issue Type: Test Reporter: Robert Muir Attachments: LUCENE-6439.patch We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester) but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (LUCENE-6420) Update forbiddenapis to 1.8
[ https://issues.apache.org/jira/browse/LUCENE-6420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Uwe Schindler updated LUCENE-6420: -- Attachment: LUCENE-6420-anno.patch More improvements and simplifications: - Created a separate execution for sysout checks (like in lucene). I also compared what happens in Lucene and Solr; it is now identical - I just suppressed the sysout checks in demo and benchmark - in lucene and solr test-framework both run the test config, but using goal check, so the right source folder is chosen. At the end we can now easily add a new annotation for sysout excludes. I think it's ready. [~steve_rowe] if you could have a look. Update forbiddenapis to 1.8 --- Key: LUCENE-6420 URL: https://issues.apache.org/jira/browse/LUCENE-6420 Project: Lucene - Core Issue Type: Improvement Components: general/build Reporter: Uwe Schindler Assignee: Uwe Schindler Fix For: Trunk, 5.2 Attachments: LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420-anno.patch, LUCENE-6420.patch Update forbidden-apis plugin to 1.8: - Initial support for Java 9 including JIGSAW - Errors are now reported sorted by line numbers and correctly grouped (synthetic methods/lambdas) - Package-level forbids: Deny all classes from a package: org.hatedpkg.** (also other globs work) - In addition to file-level excludes, forbiddenapis now supports fine-granular excludes using Java annotations. You can use the one shipped, or define your own, e.g. inside Lucene, and pass its name to forbidden (e.g. using a glob: **.SuppressForbidden would allow any annotation with that name in any package to suppress errors). Annotations only need to be on class-file level; no runtime annotation is required. This will for now only update the dependency and remove the additional forbid by [~shalinmangar] for MessageFormat (which is now shipped with forbidden).
But we should review and for example suppress forbidden failures in command line tools using @SuppressForbidden (or similar annotation). The discussion is open, I can make a patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40) - Build # 4703 - Failure!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4703/ Java: 64bit/jdk1.8.0_40 -XX:-UseCompressedOops -XX:+UseG1GC 4 tests failed. FAILED: org.apache.solr.cloud.BasicDistributedZkTest.test Error Message: commitWithin did not work on node: http://127.0.0.1:53641/collection1 expected:<68> but was:<67> Stack Trace: java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:53641/collection1 expected:<68> but was:<67> at __randomizedtesting.SeedInfo.seed([15E45893BF5C1CD8:9DB0674911A07120]:0) at org.junit.Assert.fail(Assert.java:93) at org.junit.Assert.failNotEquals(Assert.java:647) at org.junit.Assert.assertEquals(Assert.java:128) at org.junit.Assert.assertEquals(Assert.java:472) at org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:344) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55) at
Re: Change line length setting in eclipse to 120 chars
Opened LUCENE-6441 with a patch. Shai On Mon, Apr 20, 2015 at 4:23 PM, Mark Miller markrmil...@gmail.com wrote: +1. I have to constantly unwrap lines after formatting. Better to make it the other way. - Mark On Mon, Apr 20, 2015 at 8:58 AM Uwe Schindler u...@thetaphi.de wrote: +1 Please fix the Eclipse config to not force any breaks. Uwe On 20 April 2015 at 14:13:15 MESZ, Toke Eskildsen t...@statsbiblioteket.dk wrote: On Sat, 2015-04-18 at 10:07 +0300, Shai Erera wrote: Our dev-tools/eclipse configures the project to break lines on 80 characters. Are there objections to changing it to 120? Line length was discussed back in 2013 (search for Line length in Lucene/Solr code) and AFAIR the conclusion was not to have a hard max, but to aim for <= 120 characters/line. (it might be worth stating this on https://wiki.apache.org/solr/HowToContribute) Shai To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Uwe Schindler H.-H.-Meier-Allee 63, 28213 Bremen http://www.thetaphi.de
[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON
[ https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502904#comment-14502904 ] Per Steffensen commented on SOLR-7176: --
bq. Why can't we just eliminate the overseer from the picture completely?
Not that it is very important in this case, but in general there is a problem with having several threads concurrently doing fetch, update-locally, store on state without locking (pessimistic or optimistic). Example, two threads running concurrently:
* Thread#1 wants to do the task of setting urlScheme to http:
** fetches {urlScheme:https, autoAddReplicas: true}
** changes it to {urlScheme:http, autoAddReplicas: true} and stores it
* Thread#2 wants to do the task of setting autoAddReplicas to false:
** fetches {urlScheme:https, autoAddReplicas: true}
** changes it to {urlScheme:https, autoAddReplicas: false} and stores it
Without locking they can run concurrently and you will end up with a wrong state:
* {urlScheme:http, autoAddReplicas: true}
* or {urlScheme:https, autoAddReplicas: false}
But you actually expected {urlScheme:http, autoAddReplicas: false}.
I do not know what the initial thought behind the Overseer was, but I think of it as a simple way to get around this locking - making sure there is never more than one thread updating state. That said, if the above was the intention with the Overseer, it does not work today, because CollectionsHandler.handleProp does the fetch and the update, and only leaves the store to the Overseer. I would like to see the entire job handed over to the Overseer, so that it does the fetch, update and store - avoiding the concurrency scenario above. In general the Overseer should execute entire admin jobs, not only parts of them. Anyway, this is a reason not to do this kind of update without taking locks; the Overseer is a primitive way of taking a lock, and maybe therefore we should not eliminate it.
I am not sure it is especially important here. allow zkcli to modify JSON -- Key: SOLR-7176 URL: https://issues.apache.org/jira/browse/SOLR-7176 Project: Solr Issue Type: New Feature Reporter: Yonik Seeley Assignee: Noble Paul Priority: Minor Attachments: SOLR-7176.patch, SOLR-7176.patch To enable SSL, we have instructions like the following: {code} server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:2181 -cmd put /clusterprops.json '{urlScheme:https}' {code} Overwriting the value won't work well when we have more properties to put in clusterprops. We should be able to change individual values or perhaps merge values. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
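The lost-update interleaving described above can be replayed deterministically. The sketch below is illustrative plain Java, not Solr code: the two "threads" run sequentially, but each fetches its snapshot of the state before either stores, which is exactly the racy schedule in question.

```java
import java.util.HashMap;
import java.util.Map;

// Deterministic replay of the lost-update race: both writers fetch the same
// snapshot before either stores, so one update is silently overwritten.
public class LostUpdateDemo {
    static Map<String, String> store = new HashMap<>();

    public static void main(String[] args) {
        store.put("urlScheme", "https");
        store.put("autoAddReplicas", "true");

        // Thread #1 and Thread #2 each fetch a private copy of the state.
        Map<String, String> copy1 = new HashMap<>(store);
        Map<String, String> copy2 = new HashMap<>(store);

        // Thread #1 sets urlScheme=http and stores its whole copy.
        copy1.put("urlScheme", "http");
        store = copy1;

        // Thread #2 sets autoAddReplicas=false and stores its (stale) copy.
        copy2.put("autoAddReplicas", "false");
        store = copy2;

        // Thread #1's change is lost: urlScheme is back to https.
        assert store.get("urlScheme").equals("https");
        assert store.get("autoAddReplicas").equals("false");
    }
}
```

Swapping the two store steps loses the other update instead; either way one write vanishes, which is the argument for funneling all updates through a single writer or using optimistic versioning.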
[jira] [Updated] (SOLR-6220) Replica placement strategy for solrcloud
[ https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6220: - Attachment: SOLR-6220.patch Replica placement strategy for solrcloud Key: SOLR-6220 URL: https://issues.apache.org/jira/browse/SOLR-6220 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch
h1.Objective Most cloud-based systems allow specifying rules on how the replicas/nodes of a cluster are allocated. Solr should have a flexible mechanism through which we can control allocation of replicas, or later change it to suit the needs of the system. All configurations are on a per-collection basis. The rules are applied whenever a replica is created in any of the shards in a given collection during * collection creation * shard splitting * add replica * createshard There are two aspects to how replicas are placed: snitch and placement.
h2.snitch How to identify the tags of nodes. Snitches are configured through the collection create command with the snitch prefix, e.g.: snitch.type=EC2Snitch. The system provides the following implicit tag names, which cannot be used by other snitches: * node : The Solr node name * host : The hostname * ip : The IP address of the host * cores : A dynamic variable which gives the core count at any given point * disk : A dynamic variable which gives the available disk space at any given point There will be a few snitches provided by the system, such as
h3.EC2Snitch Provides two tags called dc, rack from the region and zone values in EC2
h3.IPSnitch Uses the IP to infer the “dc” and “rack” values
h3.NodePropertySnitch Lets users provide system properties to each node with tag name and value. Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this particular node will have two tags “tag-x” and “tag-y”.
h3.RestSnitch Lets the user configure a URL which the server can invoke to get all the tags for a given node. This takes extra parameters in the create command. Example: {{snitch={type=RestSnitch,url=http://snitchserverhost:port/[node]}}} The response of the REST call {{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}} must be in JSON format, e.g.:
{code:JavaScript}
{ “tag-x”:”x-val”, “tag-y”:”y-val” }
{code}
h3.ManagedSnitch This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The user should be able to manage the tags and values of each node through a collection API.
h2.Rules A rule tells how many replicas for a given shard need to be assigned to nodes with the given key/value pairs. These parameters will be passed to the collection CREATE API as a multivalued parameter rule. The values will be saved in the state of the collection as follows:
{code:JavaScript}
{ “mycollection”:{
    “snitch”: { type:“EC2Snitch” },
    “rules”:[
      {“shard”: “value1”, “replica”: “value2”, tag1:val1},
      {“shard”: “value1”, “replica”: “value2”, tag2:val2}
    ]
} }
{code}
A rule is specified in a pseudo-JSON syntax, which is a map of keys and values.
* Each collection can have any number of rules. As long as the rules do not conflict with each other it is OK; otherwise an error is thrown.
* In each rule, shard and replica can be omitted
** default value of replica is {{\*}}, meaning ANY, or you can specify a count and an operand such as {{+}} or {{-}}
** the value of shard can be a shard name, or {{\*}} meaning EACH, or {{\*\*}} meaning ANY. Default value is {{\*\*}} (ANY)
* There should be exactly one extra condition in a rule other than {{shard}} and {{replica}}.
* all keys other than {{shard}} and {{replica}} are called tags, and the tags are nothing but values provided by the snitch for each node
* By default certain tags such as {{node}}, {{host}}, {{port}} are provided by the system implicitly
Examples:
{noformat}
//in each rack there can be a max of two replicas of a given shard
{rack:*,shard:*,replica:2-}
//in each rack there can be a max of two replicas of ANY shard
{rack:*,shard:**,replica:2-}
{rack:*,replica:2-}
//in each node there should be a max of one replica of EACH shard
{node:*,shard:*,replica:1-}
//in each node there should be a max of one replica of ANY shard
{node:*,shard:**,replica:1-}
{node:*,replica:1-}
//In rack 738 and shard=shard1, there can be a max of 0 replicas
{rack:738,shard:shard1,replica:0-}
//All replicas of shard1 should go to rack 730
{shard:shard1,replica:*,rack:730}
{shard:shard1,rack:730}
// all replicas must be created on a node with at least 20GB disk
{replica:*,shard:*,disk:20+}
{noformat}
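As a reading aid for the count-with-operand values above ({{2-}} meaning at most two, {{20+}} meaning at least twenty), here is a hypothetical sketch of how such a value could be checked against an actual count. The method name and parsing are illustrative assumptions, not the actual Solr implementation.

```java
// Hypothetical check of a rule value like "2-" (at most 2), "20+" (at least 20),
// or a bare number (exact count) against an observed value. Illustrative only.
public class RuleOperand {
    static boolean matches(String spec, int actual) {
        if (spec.endsWith("-")) {           // "N-" => actual <= N
            return actual <= Integer.parseInt(spec.substring(0, spec.length() - 1));
        } else if (spec.endsWith("+")) {    // "N+" => actual >= N
            return actual >= Integer.parseInt(spec.substring(0, spec.length() - 1));
        }
        return actual == Integer.parseInt(spec); // plain "N" => exactly N
    }

    public static void main(String[] args) {
        assert matches("2-", 2);    // two replicas on the rack: allowed by "2-"
        assert !matches("2-", 3);   // three replicas: violates "2-"
        assert matches("20+", 40);  // 40GB of disk satisfies "20+"
        assert !matches("20+", 10); // 10GB does not
    }
}
```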
[jira] [Updated] (SOLR-6220) Replica placement strategy for solrcloud
[ https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6220: - Attachment: (was: SOLR-6220.patch) Replica placement strategy for solrcloud Key: SOLR-6220 URL: https://issues.apache.org/jira/browse/SOLR-6220 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch
[jira] [Commented] (LUCENE-6422) Add StreamingQuadPrefixTree
[ https://issues.apache.org/jira/browse/LUCENE-6422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502886#comment-14502886 ] David Smiley commented on LUCENE-6422: -- I'll check out your patch tonight, tomorrow at the latest. Karl/Geo3d has kept me busy :-) RE naming: in both cases it seems the current names actually aren't bad relative to your suggestions. prune is a suffix of pruneLeafyBranches (the current name is more descriptive; in any case one would still need to look at the javadocs to understand), and SpatialTrie is synonymous with SpatialPrefixTree, given that Trie and PrefixTree are synonyms. I'm +1 to rename these as you want in 6.x if you think it's worth it. There are back-compat issues with renaming them _now_. Again, we agree more javadocs (including suggested alternative names) to add clarification now would be great. I'll create a patch and seek your input. RE sandbox: It's not clear to me what is really needed/useful. If someone comes along with some newfangled index/search spatial approach, it could go in the module and not hook into any existing interface... except a Lucene Query class, and something like a Lucene TokenStream/Field for indexing. Add StreamingQuadPrefixTree --- Key: LUCENE-6422 URL: https://issues.apache.org/jira/browse/LUCENE-6422 Project: Lucene - Core Issue Type: Improvement Components: modules/spatial Affects Versions: 5.x Reporter: Nicholas Knize Attachments: LUCENE-6422.patch, LUCENE-6422.patch, LUCENE-6422_with_SPT_factory_and_benchmark.patch To conform to Lucene's inverted index, SpatialStrategies use strings to represent QuadCells and GeoHash cells, yielding 1 byte per QuadCell and 5 bits per GeoHash cell, respectively. To create the terms representing a Shape, the BytesRefIteratorTokenStream first builds all of the terms into an ArrayList of Cells in memory, then passes the ArrayList.Iterator back to invert(), which creates a second lexicographically sorted array of Terms.
This doubles the memory consumption when indexing a shape. This task introduces a PackedQuadPrefixTree that uses a StreamingStrategy to accomplish the following: 1. Create a packed 8-byte representation for a QuadCell 2. Build the packed cells 'on demand' when incrementToken is called. Improvements over this approach include the generation of the packed cells using an AutoPrefixAutomaton
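To make the "packed 8-byte representation" concrete: a quad cell is a path of quadrant choices (0-3), so each level needs only 2 bits, and a whole path plus its depth fits comfortably in one long. The layout below is an illustrative assumption for exposition, not the actual PackedQuadPrefixTree encoding.

```java
// Illustrative packing of a quadtree cell path into a single long:
// 2 bits per level (quadrant 0-3), with the depth kept in the low 6 bits.
// This is an assumed layout, not Lucene's actual one.
public class PackedQuad {
    static long pack(int[] quadrants) {
        long packed = 0L;
        for (int q : quadrants) {
            packed = (packed << 2) | q;          // append 2 bits per level
        }
        return (packed << 6) | quadrants.length; // depth in the low 6 bits
    }

    static int depth(long cell) {
        return (int) (cell & 0x3F);              // recover the depth
    }

    public static void main(String[] args) {
        long cell = pack(new int[] {1, 3, 0, 2});
        assert depth(cell) == 4;
        assert cell == 7300L; // ((((1<<2|3)<<2|0)<<2|2) << 6) | 4
    }
}
```

Because cells like these are plain numbers, they can be generated on demand inside incrementToken() instead of being buffered as string terms, which is the memory saving the issue is after.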
[jira] [Commented] (SOLR-7176) allow zkcli to modify JSON
[ https://issues.apache.org/jira/browse/SOLR-7176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502920#comment-14502920 ] Noble Paul commented on SOLR-7176: --
bq. Without locking they can run concurrently and you will end up with a wrong state
No. That is called compare-and-set in ZooKeeper.
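The compare-and-set pattern referred to here is ZooKeeper's versioned write: a writer passes the version it read, the store rejects the write if the version has since moved, and the writer re-reads and retries, so no update is lost. Below is a minimal in-memory sketch of that pattern (illustrative only, with no ZooKeeper dependency; all names are made up):

```java
import java.util.function.UnaryOperator;

// Minimal sketch of optimistic concurrency via a version number:
// store() succeeds only if the caller saw the current version, and
// update() retries the fetch-modify-store loop until it wins.
public class VersionedStore {
    private String data = "";
    private int version = 0;

    synchronized String read() { return data; }
    synchronized int readVersion() { return version; }

    // Compare-and-set: fails (returns false) when the caller's version is stale.
    synchronized boolean store(String newData, int expectedVersion) {
        if (expectedVersion != version) return false;
        data = newData;
        version++;
        return true;
    }

    static void update(VersionedStore s, UnaryOperator<String> f) {
        while (true) {                        // fetch -> modify -> CAS -> retry
            int v = s.readVersion();
            String modified = f.apply(s.read());
            if (s.store(modified, v)) return;
        }
    }

    public static void main(String[] args) {
        VersionedStore s = new VersionedStore();
        s.store("urlScheme=https;autoAddReplicas=true", 0);
        update(s, d -> d.replace("urlScheme=https", "urlScheme=http"));
        update(s, d -> d.replace("autoAddReplicas=true", "autoAddReplicas=false"));
        // Unlike the unlocked interleaving, both updates survive.
        assert s.read().equals("urlScheme=http;autoAddReplicas=false");
    }
}
```

ZooKeeper exposes the same idea through the version argument of setData, which throws a bad-version error when the znode changed under the writer.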
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502919#comment-14502919 ] Shalin Shekhar Mangar commented on SOLR-7419: - Thanks Steve! Initial value of thread local in SolrQueryTimeoutImpl overflows a long -- Key: SOLR-7419 URL: https://issues.apache.org/jira/browse/SOLR-7419 Project: Solr Issue Type: Bug Components: search Affects Versions: 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Assignee: Steve Rowe Priority: Trivial Fix For: Trunk, 5.2 Same as the title.
{code}
/** The ThreadLocal variable to store the time beyond which the processing should exit. */
public static ThreadLocal<Long> timeoutAt = new ThreadLocal<Long>() {
  @Override
  protected Long initialValue() {
    return nanoTime() + Long.MAX_VALUE;
  }
};
{code}
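The overflow is deterministic: adding Long.MAX_VALUE to any positive nanoTime() wraps past the long range to a negative value, which is why deadline checks built on such values must compare by subtraction rather than with < directly. A small demonstration (plain Java, with a fixed stand-in for nanoTime()):

```java
// Demonstrates the wrap-around in nanoTime() + Long.MAX_VALUE and why
// a subtraction-based deadline check still behaves correctly.
public class OverflowDemo {
    public static void main(String[] args) {
        long now = 100L;                     // stand-in for System.nanoTime()
        long deadline = now + Long.MAX_VALUE;

        assert deadline < 0;                 // wrapped past Long.MAX_VALUE
        assert deadline == Long.MIN_VALUE + 99L;

        // Wrap-safe check: "now" has not yet reached the deadline,
        // even though a naive (now < deadline) comparison would say it has.
        assert now - deadline < 0;
    }
}
```

This matches the resolution recorded later in the thread: the overflow is intentional and was documented rather than removed.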
[jira] [Commented] (SOLR-6220) Replica placement strategy for solrcloud
[ https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502868#comment-14502868 ] Shalin Shekhar Mangar commented on SOLR-6220: -
{quote} //in each node there should be a max one replica of EACH shard {node:*,shard:*,replica:1-} {quote}
Instead of 1- can we use regular <, <=, >, >= operators? For example replica:<=1 to signal that a maximum of 1 replica is required, and disk:>=20 can signal that the chosen node must have more than 20GB of space.
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502911#comment-14502911 ] ASF subversion and git services commented on SOLR-7419: --- Commit 1674867 from [~steve_rowe] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1674867 ] SOLR-7419: document intentional overflow in SolrQueryTimeoutImpl thread local (merged trunk r1674866)
Re: VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf
+1 Found a hyperlink issue on a page but that can be fixed later. On Sat, Apr 18, 2015 at 7:59 PM, Yonik Seeley ysee...@gmail.com wrote: +1 -Yonik On Fri, Apr 17, 2015 at 10:34 AM, Cassandra Targett casstarg...@gmail.com wrote: Please vote for the release of the Apache Solr Reference Guide for Solr 5.1. The PDF is available at: https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/ Steve Rowe I made some big changes to the styling of the guide, so please raise any issues you find in your review. Here's my +1. Thanks, Cassandra - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org -- Regards, Varun Thacker
[jira] [Commented] (LUCENE-6440) Show LuceneTestCase LiveIndexWriterConfig changes with deltas
[ https://issues.apache.org/jira/browse/LUCENE-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502986#comment-14502986 ] ASF subversion and git services commented on LUCENE-6440: - Commit 1674914 from [~rcmuir] in branch 'dev/branches/branch_5x' [ https://svn.apache.org/r1674914 ] LUCENE-6440: Show LuceneTestCase LiveIndexWriterConfig changes with deltas Show LuceneTestCase LiveIndexWriterConfig changes with deltas - Key: LUCENE-6440 URL: https://issues.apache.org/jira/browse/LUCENE-6440 Project: Lucene - Core Issue Type: Test Reporter: Robert Muir Fix For: Trunk, 5.2 Attachments: LUCENE-6440.patch With tests.verbose, each time the IWC is changed the whole thing is printed out. But this is overly verbose during indexing and does not show you what changed, so you have to stare hard at tons of IWC.toString()s and figure it out. Instead I think we should just show a diff? {noformat} [junit4] 1 NOTE: LuceneTestCase: randomly changed IWC's live settings: [junit4] 1 - ramBufferSizeMB=16.0 [junit4] 1 + ramBufferSizeMB=3.0 [junit4] 1 - maxBufferedDocs=308 [junit4] 1 + maxBufferedDocs=-1 {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
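The proposed +/- output can be produced by a straightforward diff over the live settings as key/value maps. A sketch of the idea, with assumed names (not the actual LuceneTestCase change):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the proposed verbose output: print only the settings whose
// values changed, "-" for the old value and "+" for the new one.
public class SettingsDiff {
    static String diff(Map<String, Object> before, Map<String, Object> after) {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, Object> e : before.entrySet()) {
            Object newVal = after.get(e.getKey());
            if (!e.getValue().equals(newVal)) {
                sb.append("- ").append(e.getKey()).append('=').append(e.getValue()).append('\n');
                sb.append("+ ").append(e.getKey()).append('=').append(newVal).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, Object> before = new LinkedHashMap<>();
        before.put("ramBufferSizeMB", 16.0);
        before.put("maxBufferedDocs", 308);
        Map<String, Object> after = new LinkedHashMap<>(before);
        after.put("ramBufferSizeMB", 3.0);
        after.put("maxBufferedDocs", -1);
        String d = diff(before, after);
        assert d.contains("- ramBufferSizeMB=16.0") && d.contains("+ ramBufferSizeMB=3.0");
        System.out.print(d);
    }
}
```

Unchanged settings produce no output, so a long IWC.toString() collapses to the handful of lines a reader actually needs.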
Re: VOTE: RC0 Release apache-solr-ref-guide-5.1.pdf
+1 On Fri, Apr 17, 2015 at 8:04 PM, Cassandra Targett casstarg...@gmail.com wrote: Please vote for the release of the Apache Solr Reference Guide for Solr 5.1. The PDF is available at: https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.1-RC0/ Steve Rowe I made some big changes to the styling of the guide, so please raise any issues you find in your review. Here's my +1. Thanks, Cassandra -- Regards, Shalin Shekhar Mangar.
[jira] [Created] (LUCENE-6442) Add a mockfs with unpredictable but deterministic file listing order
Robert Muir created LUCENE-6442: --- Summary: Add a mockfs with unpredictable but deterministic file listing order Key: LUCENE-6442 URL: https://issues.apache.org/jira/browse/LUCENE-6442 Project: Lucene - Core Issue Type: Task Reporter: Robert Muir Any test that uses directory listing APIs (Directory.listAll(), DirectoryStream, walkFileTree, etc.) and does not sort the results can cause reproducibility difficulties, because it might e.g. consume from random() in a different order and so on. We can instead sort and shuffle in a predictable way per-class based on the random seed.
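One way to get "unpredictable but deterministic" ordering, as a sketch: normalize the raw filesystem order by sorting, then shuffle with a Random derived from the test seed and the owning class, so different seeds see different orders but any rerun of the same seed sees the same one. The seed derivation below is an illustrative assumption, not the actual mockfs implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Sort first (to erase filesystem-dependent order), then shuffle with a
// Random seeded per test class, so the listing order is reproducible
// for a given seed but varies across seeds.
public class DeterministicListing {
    static List<String> listing(List<String> rawListing, long testSeed, String className) {
        List<String> files = new ArrayList<>(rawListing);
        Collections.sort(files);                      // normalize raw FS order
        long seed = testSeed ^ className.hashCode();  // per-class derivation (assumed)
        Collections.shuffle(files, new Random(seed)); // deterministic shuffle
        return files;
    }

    public static void main(String[] args) {
        List<String> raw = Arrays.asList("segments_1", "_0.cfs", "_0.si", "write.lock");
        List<String> a = listing(raw, 42L, "TestFoo");
        List<String> b = listing(raw, 42L, "TestFoo");
        assert a.equals(b);            // same seed => identical order, reproducible
        assert a.containsAll(raw);     // same contents, just reordered
    }
}
```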
[jira] [Commented] (LUCENE-6440) Show LuceneTestCase LiveIndexWriterConfig changes with deltas
[ https://issues.apache.org/jira/browse/LUCENE-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502977#comment-14502977 ] ASF subversion and git services commented on LUCENE-6440: - Commit 1674912 from [~rcmuir] in branch 'dev/trunk' [ https://svn.apache.org/r1674912 ] LUCENE-6440: Show LuceneTestCase LiveIndexWriterConfig changes with deltas
[jira] [Resolved] (LUCENE-6440) Show LuceneTestCase LiveIndexWriterConfig changes with deltas
[ https://issues.apache.org/jira/browse/LUCENE-6440?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Muir resolved LUCENE-6440. - Resolution: Fixed Fix Version/s: 5.2, Trunk
[jira] [Commented] (SOLR-6220) Replica placement strategy for solrcloud
[ https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502881#comment-14502881 ] Noble Paul commented on SOLR-6220: -- The rules are passed as request params . example : {{rule=shard:*,disk=20+}} I considered using {{=}} or {{=}} . But , that means the user needs to escape {{=}} in the request. Which can badly affect the readability Replica placement strategy for solrcloud Key: SOLR-6220 URL: https://issues.apache.org/jira/browse/SOLR-6220 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch h1.Objective Most cloud based systems allow to specify rules on how the replicas/nodes of a cluster are allocated . Solr should have a flexible mechanism through which we should be able to control allocation of replicas or later change it to suit the needs of the system All configurations are per collection basis. The rules are applied whenever a replica is created in any of the shards in a given collection during * collection creation * shard splitting * add replica * createsshard There are two aspects to how replicas are placed: snitch and placement. h2.snitch How to identify the tags of nodes. Snitches are configured through collection create command with the snitch prefix . eg: snitch.type=EC2Snitch. 
The system provides the following implicit tag names which cannot be used by other snitches * node : The solr nodename * host : The hostname * ip : The ip address of the host * cores : This is a dynamic varibale which gives the core count at any given point * disk : This is a dynamic variable which gives the available disk space at any given point There will a few snitches provided by the system such as h3.EC2Snitch Provides two tags called dc, rack from the region and zone values in EC2 h3.IPSnitch Use the IP to infer the “dc” and “rack” values h3.NodePropertySnitch This lets users provide system properties to each node with tagname and value . example : -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this particular node will have two tags “tag-x” and “tag-y” . h3.RestSnitch Which lets the user configure a url which the server can invoke and get all the tags for a given node. This takes extra parameters in create command example: {{snitch={type=RestSnitch,url=http://snitchserverhost:port/[node]}} The response of the rest call {{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}} must be in json format eg: {code:JavaScript} { “tag-x”:”x-val”, “tag-y”:”y-val” } {code} h3.ManagedSnitch This snitch keeps a list of nodes and their tag value pairs in Zookeeper. The user should be able to manage the tags and values of each node through a collection API h2.Rules This tells how many replicas for a given shard needs to be assigned to nodes with the given key value pairs. These parameters will be passed on to the collection CREATE api as a multivalued parameter rule . The values will be saved in the state of the collection as follows {code:Javascript} { “mycollection”:{ “snitch”: { type:“EC2Snitch” } “rules”:[ {“shard”: “value1”, “replica”: “value2”, tag1:val1}, {“shard”: “value1”, “replica”: “value2”, tag2:val2} ] } {code} A rule is specified as a pseudo JSON syntax . which is a map of keys and values *Each collection can have any number of rules. 
As long as the rules do not conflict with each other it should be OK; otherwise an error is thrown. * In each rule, shard and replica can be omitted ** the default value of replica is {{\*}}, which means ANY, or you can specify a count and an operand such as {{+}} or {{-}} ** and the value of shard can be a shard name, or {{\*}} meaning EACH, or {{**}} meaning ANY. The default value is {{\*\*}} (ANY) * There should be exactly one extra condition in a rule other than {{shard}} and {{replica}}. * All keys other than {{shard}} and {{replica}} are called tags, and the tags are nothing but values provided by the snitch for each node * By default certain tags such as {{node}}, {{host}}, {{port}} are provided by the system implicitly Examples: {noformat} //in each rack there can be at most two replicas of a given shard {rack:*,shard:*,replica:2-} //in each rack there can be at most two replicas of ANY shard {rack:*,shard:**,replica:2-} {rack:*,replica:2-} //in each node there should be at most one replica of EACH shard {node:*,shard:*,replica:1-} //in each node there should be at most one replica of ANY shard {node:*,shard:**,replica:1-} {node:*,replica:1-} //In rack 738 and shard=shard1, there can be at most 0 replicas
[jira] [Updated] (LUCENE-6441) Change default formatting settings to break lines at 120 characters
[ https://issues.apache.org/jira/browse/LUCENE-6441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shai Erera updated LUCENE-6441: --- Attachment: LUCENE-6441.patch Change default formatting settings to break lines at 120 characters --- Key: LUCENE-6441 URL: https://issues.apache.org/jira/browse/LUCENE-6441 Project: Lucene - Core Issue Type: Improvement Components: -tools Reporter: Shai Erera Assignee: Shai Erera Priority: Minor Attachments: LUCENE-6441.patch Our eclipse settings default to break lines at 80 characters. This issue changes them to break lines at 120 characters. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Rowe resolved SOLR-7419. -- Resolution: Not A Problem Assignee: Steve Rowe Here's the comment I added to explain the intentional overflow: {code:java}
Index: solr/core/src/java/org/apache/solr/search/SolrQueryTimeoutImpl.java
===
--- solr/core/src/java/org/apache/solr/search/SolrQueryTimeoutImpl.java (revision 1674843)
+++ solr/core/src/java/org/apache/solr/search/SolrQueryTimeoutImpl.java (working copy)
@@ -33,6 +33,23 @@
  * The ThreadLocal variable to store the time beyond which, the processing should exit.
  */
 public static ThreadLocal<Long> timeoutAt = new ThreadLocal<Long>() {
+  /**
+   * {@inheritDoc}
+   * <p>
+   * By default, timeoutAt is set as far in the future as possible,
+   * so that it effectively never happens.
+   * <p>
+   * Since nanoTime() values can be anything from Long.MIN_VALUE to
+   * Long.MAX_VALUE, adding Long.MAX_VALUE can cause overflow. That's
+   * expected and works fine, since in that case the subtraction of a
+   * future nanoTime() value from timeoutAt (in
+   * {@link SolrQueryTimeoutImpl#shouldExit}) will result in underflow,
+   * and checking the sign of the result of that subtraction (via
+   * comparison to zero) will correctly indicate whether the future
+   * nanoTime() value has exceeded the timeoutAt value.
+   * <p>
+   * See {@link System#nanoTime}
+   */
   @Override
   protected Long initialValue() {
     return nanoTime() + Long.MAX_VALUE;
{code} Initial value of thread local in SolrQueryTimeoutImpl overflows a long -- Key: SOLR-7419 URL: https://issues.apache.org/jira/browse/SOLR-7419 Project: Solr Issue Type: Bug Components: search Affects Versions: 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Assignee: Steve Rowe Priority: Trivial Fix For: Trunk, 5.2 Same as the title. {code}
/**
 * The ThreadLocal variable to store the time beyond which, the processing should exit.
 */
public static ThreadLocal<Long> timeoutAt = new ThreadLocal<Long>() {
  @Override
  protected Long initialValue() {
    return nanoTime() + Long.MAX_VALUE;
  }
};
{code}
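The overflow behavior described above is easy to check in isolation. The sketch below is a standalone illustration (the class name and the simplified shouldExit are invented for this demo, not Solr's actual code): adding Long.MAX_VALUE to a nanoTime-style value may wrap to a negative number, yet checking the sign of the difference still classifies "before deadline" and "after deadline" correctly.

```java
public class NanoTimeOverflowDemo {
    // Sign-of-difference comparison, as the comment above prescribes:
    // never compare nanoTime-style values with <, only check the sign
    // of their difference.
    static boolean shouldExit(long timeoutAt, long now) {
        return now - timeoutAt > 0;
    }

    public static void main(String[] args) {
        long now = Long.MAX_VALUE - 10;        // a nanoTime() value near the top of the long range
        long timeoutAt = now + Long.MAX_VALUE; // overflows to a negative value, as intended

        // The deadline wrapped around, but the sign check still reports
        // that the timeout has not been reached:
        System.out.println(shouldExit(timeoutAt, now));           // false

        // A "now" just past the wrapped deadline is correctly detected:
        System.out.println(shouldExit(timeoutAt, timeoutAt + 5)); // true
    }
}
```

This is why the JIRA was resolved "Not A Problem": the overflow is deliberate and the subtraction-based check is immune to it.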
[jira] [Updated] (SOLR-6665) ZkController.publishAndWaitForDownStates should not use core name
[ https://issues.apache.org/jira/browse/SOLR-6665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shalin Shekhar Mangar updated SOLR-6665: Attachment: SOLR-6665.patch Fix and test to demonstrate the bug. Since the current code uses a core name and doesn't check for the node name in the verification phase, it can easily be fooled if a replica with the same core name exists on a different node. The test asserts that this method times out. It makes the test always take at least 60s (the timeout value for the method), but I can't find a better way. ZkController.publishAndWaitForDownStates should not use core name - Key: SOLR-6665 URL: https://issues.apache.org/jira/browse/SOLR-6665 Project: Solr Issue Type: Bug Components: SolrCloud Affects Versions: 4.10.1 Reporter: Shalin Shekhar Mangar Priority: Minor Fix For: 5.0, Trunk Attachments: SOLR-6665.patch ZkController.publishAndWaitForDownStates uses a List<String> to keep track of all core names that have been published as down. It should use a set of coreNodeNames instead of core names for correctness.
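The failure mode described above can be reproduced in miniature. The sketch below is purely illustrative (the core names, coreNodeNames, and helper methods are invented, not Solr's code): a collection of core names cannot distinguish two replicas on different nodes that share a core name, while coreNodeNames, being unique per replica, can.

```java
import java.util.List;
import java.util.Set;

public class DownStateTrackingDemo {
    // Keyed on core name, as in the buggy code path described above.
    static boolean isDownByCoreName(List<String> downCoreNames, String coreName) {
        return downCoreNames.contains(coreName);
    }

    // Keyed on coreNodeName, which is unique per replica.
    static boolean isDownByCoreNodeName(Set<String> downCoreNodeNames, String coreNodeName) {
        return downCoreNodeNames.contains(coreNodeName);
    }

    public static void main(String[] args) {
        // Node A publishes its replica (core name "coll1_shard1_replica1") as down.
        List<String> downCoreNames = List.of("coll1_shard1_replica1");

        // Node B hosts a different replica with the same core name; a
        // core-name check wrongly reports node B's replica as down too:
        System.out.println(isDownByCoreName(downCoreNames, "coll1_shard1_replica1")); // true (false positive for node B)

        // coreNodeNames distinguish the two replicas:
        Set<String> downCoreNodeNames = Set.of("core_node1"); // node A's replica
        System.out.println(isDownByCoreNodeName(downCoreNodeNames, "core_node2")); // false: node B's replica is distinct
    }
}
```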
[jira] [Updated] (SOLR-6220) Replica placement strategy for solrcloud
[ https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Noble Paul updated SOLR-6220: - Description: h1.Objective Most cloud-based systems allow specifying rules on how the replicas/nodes of a cluster are allocated. Solr should have a flexible mechanism through which we can control the allocation of replicas, or later change it to suit the needs of the system. All configurations are on a per-collection basis. The rules are applied whenever a replica is created in any of the shards of a given collection during * collection creation * shard splitting * add replica * createshard There are two aspects to how replicas are placed: snitch and placement. h2.snitch How to identify the tags of nodes. Snitches are configured through the collection create command with the snitch prefix, e.g. snitch.type=EC2Snitch. The system provides the following implicit tag names which cannot be used by other snitches: * node : the Solr node name * host : the hostname * ip : the IP address of the host * cores : a dynamic variable which gives the core count at any given point * disk : a dynamic variable which gives the available disk space at any given point There will be a few snitches provided by the system, such as: h3.EC2Snitch Provides two tags called dc and rack from the region and zone values in EC2. h3.IPSnitch Uses the IP to infer the "dc" and "rack" values. h3.NodePropertySnitch This lets users provide system properties to each node with tag name and value. Example: -Dsolrcloud.snitch.vals=tag-x:val-a,tag-y:val-b. This means this particular node will have two tags, "tag-x" and "tag-y". h3.RestSnitch Lets the user configure a URL which the server can invoke to get all the tags for a given node.
This takes extra parameters in the create command. Example: {{snitch={type=RestSnitch,url=http://snitchserverhost:port/[node]}}} The response of the REST call {{http://snitchserverhost:port/?nodename=192.168.1:8080_solr}} must be in JSON format, e.g.: {code:JavaScript} { "tag-x":"x-val", "tag-y":"y-val" } {code} h3.ManagedSnitch This snitch keeps a list of nodes and their tag/value pairs in ZooKeeper. The user should be able to manage the tags and values of each node through a collection API. h2.Rules A rule tells how many replicas for a given shard need to be assigned to nodes with the given key/value pairs. These parameters will be passed on to the collection CREATE API as a multivalued parameter rule. The values will be saved in the state of the collection as follows: {code:JavaScript} { "mycollection":{ "snitch": { class:"ImplicitTagsSnitch" }, "rules":[{cores:4-}, {replica:1, shard:*, node:*}, {disk:1+}] } } {code} A rule is specified in a pseudo-JSON syntax, which is a map of keys and values. *Each collection can have any number of rules. As long as the rules do not conflict with each other it should be OK; otherwise an error is thrown. * In each rule, shard and replica can be omitted ** the default value of replica is {{\*}}, which means ANY, or you can specify a count and an operand such as {{+}} or {{-}} ** and the value of shard can be a shard name, or {{\*}} meaning EACH, or {{**}} meaning ANY. The default value is {{\*\*}} (ANY) * There should be exactly one extra condition in a rule other than {{shard}} and {{replica}}.
* All keys other than {{shard}} and {{replica}} are called tags, and the tags are nothing but values provided by the snitch for each node * By default certain tags such as {{node}}, {{host}}, {{port}} are provided by the system implicitly Examples: {noformat} //in each rack there can be at most two replicas of a given shard {rack:*,shard:*,replica:2-} //in each rack there can be at most two replicas of ANY shard {rack:*,shard:**,replica:2-} {rack:*,replica:2-} //in each node there should be at most one replica of EACH shard {node:*,shard:*,replica:1-} //in each node there should be at most one replica of ANY shard {node:*,shard:**,replica:1-} {node:*,replica:1-} //In rack 738 and shard=shard1, there can be at most 0 replicas {rack:738,shard:shard1,replica:0-} //All replicas of shard1 should go to rack 730 {shard:shard1,replica:*,rack:730} {shard:shard1,rack:730} // all replicas must be created in a node with at least 20GB disk {replica:*,shard:*,disk:20+} {replica:*,disk:20+} {disk:20+} // All replicas should be created in nodes with less than 5 cores //In this case ANY and EACH mean the same for shard {replica:*,shard:**,cores:5-} {replica:*,cores:5-} {cores:5-} //one replica of shard1 must go to node 192.168.1.2:8080_solr {node:"192.168.1.2:8080_solr", shard:shard1, replica:1} //No replica of shard1 should go to rack 738 {rack:!738,shard:shard1,replica:*} {rack:!738,shard:shard1} //No replica of ANY shard should go to rack 738 {rack:!738,shard:**,replica:*} {rack:!738,shard:*} {rack:!738}
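A minimal sketch of how the count-with-operand conditions in these examples could be evaluated: "2-" means two or fewer, "20+" means twenty or more, a bare number means exactly that many, and "*" means any. The class and method names below are invented for illustration; this is not Solr's actual rule engine.

```java
import java.util.function.LongPredicate;

public class RuleConditionDemo {
    // Parse a condition value from a rule into a predicate over an
    // observed count (replica count, GB of disk, core count, ...).
    static LongPredicate parseCondition(String cond) {
        if (cond.equals("*")) {
            return v -> true;                                  // ANY: no constraint
        }
        if (cond.endsWith("+")) {                              // "n+" : n or more
            long n = Long.parseLong(cond.substring(0, cond.length() - 1));
            return v -> v >= n;
        }
        if (cond.endsWith("-")) {                              // "n-" : n or fewer
            long n = Long.parseLong(cond.substring(0, cond.length() - 1));
            return v -> v <= n;
        }
        long n = Long.parseLong(cond);                         // bare number: exact count
        return v -> v == n;
    }

    public static void main(String[] args) {
        System.out.println(parseCondition("2-").test(3));   // false: 3 replicas violate "at most 2"
        System.out.println(parseCondition("20+").test(25)); // true: 25GB satisfies "at least 20GB disk"
        System.out.println(parseCondition("1").test(1));    // true: exactly one replica
    }
}
```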
[jira] [Comment Edited] (SOLR-6220) Replica placement strategy for solrcloud
[ https://issues.apache.org/jira/browse/SOLR-6220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14502881#comment-14502881 ] Noble Paul edited comment on SOLR-6220 at 4/20/15 2:15 PM: --- The rules are passed as request params . example : {{rule=shard:*,disk=20+}} I considered using {{=}} , {{=}} and {{!=}} . But , that means the user needs to escape {{=}} in the request. Which can badly affect the readability was (Author: noble.paul): The rules are passed as request params . example : {{rule=shard:*,disk=20+}} I considered using {{=}} or {{=}} . But , that means the user needs to escape {{=}} in the request. Which can badly affect the readability Replica placement strategy for solrcloud Key: SOLR-6220 URL: https://issues.apache.org/jira/browse/SOLR-6220 Project: Solr Issue Type: Bug Components: SolrCloud Reporter: Noble Paul Assignee: Noble Paul Attachments: SOLR-6220.patch, SOLR-6220.patch, SOLR-6220.patch h1.Objective Most cloud based systems allow to specify rules on how the replicas/nodes of a cluster are allocated . Solr should have a flexible mechanism through which we should be able to control allocation of replicas or later change it to suit the needs of the system All configurations are per collection basis. The rules are applied whenever a replica is created in any of the shards in a given collection during * collection creation * shard splitting * add replica * createsshard There are two aspects to how replicas are placed: snitch and placement. h2.snitch How to identify the tags of nodes. Snitches are configured through collection create command with the snitch prefix . eg: snitch.type=EC2Snitch. 
[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40) - Build # 4704 - Still Failing!
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4704/ Java: 64bit/jdk1.8.0_40 -XX:-UseCompressedOops -XX:+UseParallelGC 1 tests failed. FAILED: org.apache.solr.client.solrj.impl.CloudSolrClientTest.test Error Message: Error from server at http://127.0.0.1:62293/checkStateVerCol: no servers hosting shard: Stack Trace: org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:62293/checkStateVerCol: no servers hosting shard: at __randomizedtesting.SeedInfo.seed([2E526DF0956F451C:A606522A3B9328E4]:0) at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:235) at org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:227) at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943) at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.stateVersionParamTest(CloudSolrClientTest.java:554) at org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:127) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627) at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836) at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872) at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886) at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110) 
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960) at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49) at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65) at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365) at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798) at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458) at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845) at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747) at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781) at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46) at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39) at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) at
[jira] [Commented] (SOLR-7419) Initial value of thread local in SolrQueryTimeoutImpl overflows a long
[ https://issues.apache.org/jira/browse/SOLR-7419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14502910#comment-14502910 ] ASF subversion and git services commented on SOLR-7419: --- Commit 1674866 from [~steve_rowe] in branch 'dev/trunk' [ https://svn.apache.org/r1674866 ] SOLR-7419: document intentional overflow in SolrQueryTimeoutImpl thread local Initial value of thread local in SolrQueryTimeoutImpl overflows a long -- Key: SOLR-7419 URL: https://issues.apache.org/jira/browse/SOLR-7419 Project: Solr Issue Type: Bug Components: search Affects Versions: 4.10.4, 5.1 Reporter: Shalin Shekhar Mangar Priority: Trivial Fix For: Trunk, 5.2 Same as the title. {code}
/**
 * The ThreadLocal variable to store the time beyond which, the processing should exit.
 */
public static ThreadLocal<Long> timeoutAt = new ThreadLocal<Long>() {
  @Override
  protected Long initialValue() {
    return nanoTime() + Long.MAX_VALUE;
  }
};
{code}
[jira] [Commented] (LUCENE-6439) Create test-framework/src/test
[ https://issues.apache.org/jira/browse/LUCENE-6439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14503251#comment-14503251 ] ASF subversion and git services commented on LUCENE-6439: - Commit 1674946 from [~rcmuir] in branch 'dev/trunk' [ https://svn.apache.org/r1674946 ] LUCENE-6439: Create test-framework/src/test Create test-framework/src/test -- Key: LUCENE-6439 URL: https://issues.apache.org/jira/browse/LUCENE-6439 Project: Lucene - Core Issue Type: Test Reporter: Robert Muir Attachments: LUCENE-6439.patch We have quite a few tests (~30 suites) for test-framework stuff (test-the-tester) but currently they all sit in lucene/core housed with real tests. I think we should just give test-framework a src/test and move these tests there. This makes the build simpler in the future too, because it's less special.
[jira] [Created] (LUCENE-6444) ant nightly-smoke fails in trunk
Robert Muir created LUCENE-6444: --- Summary: ant nightly-smoke fails in trunk Key: LUCENE-6444 URL: https://issues.apache.org/jira/browse/LUCENE-6444 Project: Lucene - Core Issue Type: Bug Reporter: Robert Muir I don't know the last time this was run by jenkins, but: {noformat} [smoker] Verify... [smoker] confirm all releases have coverage in TestBackwardsCompatibility [smoker] find all past Lucene releases... [smoker] run TestBackwardsCompatibility.. [smoker] Releases that don't seem to be tested: [smoker] 4.10.4 {noformat} And I don't see any unsupported-4.10.4-cfs/nocfs.zip in the backwards-codec/ module (to test we do the right thing), so I think the failure is correct. I will fix this a little bit later if nobody beats me to it.
[jira] [Updated] (SOLR-7406) Support DV implementation in range faceting
[ https://issues.apache.org/jira/browse/SOLR-7406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tomás Fernández Löbbe updated SOLR-7406: Attachment: SOLR-7406.patch New patch with some more tests with multiValued=true/false. Added tests for bad requests. [~shalinmangar], I think this patch conflicts with your work in SOLR-4212, would you mind taking a quick look? I think this patch is mostly ready. Support DV implementation in range faceting --- Key: SOLR-7406 URL: https://issues.apache.org/jira/browse/SOLR-7406 Project: Solr Issue Type: Improvement Reporter: Tomás Fernández Löbbe Assignee: Tomás Fernández Löbbe Fix For: Trunk Attachments: SOLR-7406.patch, SOLR-7406.patch, SOLR-7406.patch Interval faceting has a different implementation than range faceting, based on the DocValues API. This is sometimes faster and doesn't rely on filters / the filter cache. I'm planning to add a method parameter that would allow users to choose between the current implementation (filter) and the DV-based implementation (dv). The result for both methods should be the same, but performance may vary. Default should continue to be filter.