[jira] [Commented] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082621#comment-15082621
 ] 

ASF subversion and git services commented on LUCENE-6961:
-

Commit 1723014 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723014 ]

Merged revision(s) 1723013 from lucene/dev/trunk:
LUCENE-6961: Add exception message; make method private

> Improve Exception handling in AnalysisFactory/SPI loader
> 
>
> Key: LUCENE-6961
> URL: https://issues.apache.org/jira/browse/LUCENE-6961
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6961.patch
>
>
> Currently the AnalysisSPILoader used by AbstractAnalysisFactory uses a 
> {{catch Exception}} block when invoking the constructor. If the constructor 
> throws exceptions such as IllegalArgumentException, they are hidden inside 
> InvocationTargetException, which then gets wrapped in IllegalArgumentException. 
> This is not useful.
> This patch will:
> - Only catch ReflectiveOperationException
> - If it is an InvocationTargetException, rethrow the cause if it is 
> unchecked; otherwise wrap it in RuntimeException
> - If the constructor cannot be called at all (reflective access denied, 
> method not found, ...), throw UnsupportedOperationException (UOE) with an 
> explanatory message
> This patch will be required by the next version of LUCENE-6958.
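The exception-handling pattern described in the issue can be sketched roughly as follows. This is a simplified illustration, not the actual Lucene code: the class and method names here are hypothetical, and the real AnalysisSPILoader differs in detail.

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationTargetException;
import java.util.Map;

// Simplified sketch of the improved SPI instantiation logic.
// Names are illustrative, not Lucene's actual API.
public final class SpiInstantiator {
  public static <T> T newFactory(Class<T> clazz, Map<String, String> args) {
    try {
      final Constructor<T> ctor = clazz.getConstructor(Map.class);
      return ctor.newInstance(args);
    } catch (InvocationTargetException ite) {
      // The factory constructor itself threw: rethrow the original cause
      // unwrapped if it is unchecked, otherwise wrap it in RuntimeException.
      final Throwable cause = ite.getCause();
      if (cause instanceof RuntimeException) throw (RuntimeException) cause;
      if (cause instanceof Error) throw (Error) cause;
      throw new RuntimeException(cause);
    } catch (ReflectiveOperationException e) {
      // The constructor could not be invoked at all (missing, inaccessible,
      // ...): fail with an explanatory message instead of a bare wrapper.
      throw new UnsupportedOperationException(clazz.getName()
          + " has no accessible (Map<String,String>) constructor: " + e, e);
    }
  }
  private SpiInstantiator() {}
}
```

Note the catch order: InvocationTargetException extends ReflectiveOperationException, so the more specific catch must come first for the cause-unwrapping branch to be reachable.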



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6961) Improve Exception handling in AnalysisFactory/SPI loader

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082620#comment-15082620
 ] 

ASF subversion and git services commented on LUCENE-6961:
-

Commit 1723013 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1723013 ]

LUCENE-6961: Add exception message; make method private

> Improve Exception handling in AnalysisFactory/SPI loader
> 
>
> Key: LUCENE-6961
> URL: https://issues.apache.org/jira/browse/LUCENE-6961
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6961.patch
>
>
> Currently the AnalysisSPILoader used by AbstractAnalysisFactory uses a 
> {{catch Exception}} block when invoking the constructor. If the constructor 
> throws exceptions such as IllegalArgumentException, they are hidden inside 
> InvocationTargetException, which then gets wrapped in IllegalArgumentException. 
> This is not useful.
> This patch will:
> - Only catch ReflectiveOperationException
> - If it is an InvocationTargetException, rethrow the cause if it is 
> unchecked; otherwise wrap it in RuntimeException
> - If the constructor cannot be called at all (reflective access denied, 
> method not found, ...), throw UnsupportedOperationException (UOE) with an 
> explanatory message
> This patch will be required by the next version of LUCENE-6958.






[jira] [Commented] (LUCENE-6958) Improve CustomAnalyzer to also allow to specify factory directly (for compile-time safety)

2016-01-05 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082636#comment-15082636
 ] 

Uwe Schindler commented on LUCENE-6958:
---

Thanks Shai; I will change the patch a bit and upload a new one based on 
LUCENE-6961.

> Improve CustomAnalyzer to also allow to specify factory directly (for 
> compile-time safety)
> --
>
> Key: LUCENE-6958
> URL: https://issues.apache.org/jira/browse/LUCENE-6958
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5
>
> Attachments: LUCENE-6958.patch
>
>
> Currently CustomAnalyzer only allows specifying the SPI names of factories. 
> As the fluent builder pattern is mostly used inside Java code, it is better 
> for type safety to optionally also allow specifying the factory class 
> directly (using compile-time-safe patterns like 
> {{.withTokenizer(WhitespaceTokenizerFactory.class)}}). With the string names, 
> you get the error only at runtime. Of course this does not help with wrongly 
> spelled parameter names, but it has the nice side effect that you can click 
> on the class name in your code to get javadocs with the parameter names.
> This issue will add this functionality and update the docs/example.
> Thanks to [~shaie] for suggesting this!
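The change described above can be used roughly as sketched below. This is a sketch against the Lucene 5.5 analysis API: the factory classes named are real Lucene classes, but the exact fluent method signatures (and the class-based overloads this issue adds) should be verified against the released javadocs.

```java
import java.io.IOException;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.core.LowerCaseFilterFactory;
import org.apache.lucene.analysis.core.WhitespaceTokenizerFactory;
import org.apache.lucene.analysis.custom.CustomAnalyzer;

public class CustomAnalyzerDemo {
  public static void main(String[] args) throws IOException {
    // Before: SPI string names; a typo is only caught at runtime.
    Analyzer byName = CustomAnalyzer.builder()
        .withTokenizer("whitespace")
        .addTokenFilter("lowercase")
        .build();

    // After LUCENE-6958: factory class references; a wrong name fails at
    // compile time, and the IDE links straight to the factory's javadocs.
    Analyzer byClass = CustomAnalyzer.builder()
        .withTokenizer(WhitespaceTokenizerFactory.class)
        .addTokenFilter(LowerCaseFilterFactory.class)
        .build();

    byName.close();
    byClass.close();
  }
}
```

As the issue notes, the class-based form still cannot validate parameter names at compile time; its benefit is catching misspelled factory names early and improving IDE navigation.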






[jira] [Commented] (LUCENE-6958) Improve CustomAnalyzer to also allow to specify factory directly (for compile-time safety)

2016-01-05 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082627#comment-15082627
 ] 

Shai Erera commented on LUCENE-6958:


+1 to commit. Thanks Uwe!

> Improve CustomAnalyzer to also allow to specify factory directly (for 
> compile-time safety)
> --
>
> Key: LUCENE-6958
> URL: https://issues.apache.org/jira/browse/LUCENE-6958
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5
>
> Attachments: LUCENE-6958.patch
>
>
> Currently CustomAnalyzer only allows specifying the SPI names of factories. 
> As the fluent builder pattern is mostly used inside Java code, it is better 
> for type safety to optionally also allow specifying the factory class 
> directly (using compile-time-safe patterns like 
> {{.withTokenizer(WhitespaceTokenizerFactory.class)}}). With the string names, 
> you get the error only at runtime. Of course this does not help with wrongly 
> spelled parameter names, but it has the nice side effect that you can click 
> on the class name in your code to get javadocs with the parameter names.
> This issue will add this functionality and update the docs/example.
> Thanks to [~shaie] for suggesting this!






[jira] [Updated] (LUCENE-6960) TestUninvertingReader.testFieldInfos() failure

2016-01-05 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated LUCENE-6960:
-
Attachment: LUCENE-6960.patch

Thanks for reporting this, Steve. It seems to me (and I may be wrong) that the 
attribute PerFieldDocValuesFormat.Suffix is added (or not added) to FieldInfos 
for docValues fields depending on the codec chosen at random; most test seeds 
seem to add it, but the reported ones didn't.

This makes the test for attributes being passed on to the new UninvertingReader 
unreliable, so I've removed it in this patch.

> TestUninvertingReader.testFieldInfos() failure
> --
>
> Key: LUCENE-6960
> URL: https://issues.apache.org/jira/browse/LUCENE-6960
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 5.5, Trunk
>Reporter: Steve Rowe
> Attachments: LUCENE-6960.patch
>
>
> My Jenkins found a reproducible seed for 
> {{TestUninvertingReader.testFieldInfos()}} - fails on both branch_5x and 
> trunk:
> {noformat}
>[junit4] Suite: org.apache.lucene.uninverting.TestUninvertingReader
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestUninvertingReader -Dtests.method=testFieldInfos 
> -Dtests.seed=349A6776161E26B5 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr_ME -Dtests.timezone=US/Indiana-Starke -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.31s | TestUninvertingReader.testFieldInfos <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<0> but 
> was:
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([349A6776161E26B5:24AE58B19B3CAD1C]:0)
>[junit4]>at 
> org.apache.lucene.uninverting.TestUninvertingReader.testFieldInfos(TestUninvertingReader.java:385)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
>[junit4]   2> NOTE: test params are: codec=SimpleText, 
> sim=ClassicSimilarity, locale=sr_ME, timezone=US/Indiana-Starke
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_45 (64-bit)/cpus=16,threads=1,free=412590336,total=514850816
>[junit4]   2> NOTE: All tests run in this JVM: [TestUninvertingReader]
>[junit4] Completed [1/1 (1!)] in 0.47s, 1 test, 1 failure <<< FAILURES!
> {noformat}






[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1066 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1066/

All tests passed

Build Log:
[...truncated 11006 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/temp/junit4-J1-20160105_082407_512.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/heapdumps/java_pid4782.hprof
 ...
   [junit4] Heap dump file created [688171648 bytes in 8.643 secs]
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/temp/junit4-J1-20160105_082407_512.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] <<< JVM J1: EOF 

[...truncated 915 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/x1/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7/jre/bin/java 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/heapdumps
 -XX:MaxPermSize=192m -ea -esa -Dtests.prefix=tests 
-Dtests.seed=89100D3D531CBEFD -Xmx512M -Dtests.iters= -Dtests.verbose=false 
-Dtests.infostream=false -Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.luceneMatchVersion=5.5.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/tools/junit4/logging.properties
 -Dtests.nightly=true -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/temp
 
-Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene
 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=5.5.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=UTF-8 -classpath 

[jira] [Commented] (SOLR-8459) NPE using TermVectorComponent in combination with ExactStatsCache

2016-01-05 Thread Andreas Daffner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082798#comment-15082798
 ] 

Andreas Daffner commented on SOLR-8459:
---

Thanks a lot for your fix!
Is this bugfix already included in the new Solr 5.4?

> NPE using TermVectorComponent in combination with ExactStatsCache
> -
>
> Key: SOLR-8459
> URL: https://issues.apache.org/jira/browse/SOLR-8459
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Andreas Daffner
> Attachments: SOLR-8459.patch
>
>
> Hello,
> I am getting an NPE when using the TermVectorComponent in combination with 
> ExactStatsCache.
> I am using SOLR 5.3.0 with 4 shards in total.
> I set up my solrconfig.xml as described in these 2 links:
> TermVectorComponent:
> https://cwiki.apache.org/confluence/display/solr/The+Term+Vector+Component
> ExactStatsCache:
> https://cwiki.apache.org/confluence/display/solr/Distributed+Requests#Configuring+statsCache+implementation
> My snippets from solrconfig.xml:
> {code}
> ...
>   <statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
>   <searchComponent name="tvComponent"
>class="org.apache.solr.handler.component.TermVectorComponent"/>
>   <requestHandler name="/tvrh"
>class="org.apache.solr.handler.component.SearchHandler">
> <lst name="defaults">
>   <bool name="tv">true</bool>
> </lst>
> <arr name="last-components">
>   <str>tvComponent</str>
> </arr>
>   </requestHandler>
> ...
> {code}
> Unfortunately a request to SOLR like 
> "http://host/solr/corename/tvrh?q=site_url_id:74" ends up with this NPE:
> {code}
> 4329458 ERROR (qtp59559151-17) [c:SingleDomainSite_11 s:shard1 r:core_node1 
> x:SingleDomainSite_11_shard1_replica1] o.a.s.c.SolrCore 
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:454)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> According to https://issues.apache.org/jira/browse/SOLR-7756 this bug should 
> have been fixed in SOLR 5.3.0, but obviously the NPE is still present.
> Can you please help me here?






[jira] [Commented] (LUCENE-6958) Improve CustomAnalyzer to also allow to specify factory directly (for compile-time safety)

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082736#comment-15082736
 ] 

ASF subversion and git services commented on LUCENE-6958:
-

Commit 1723028 from [~thetaphi] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723028 ]

Merged revision(s) 1723027 from lucene/dev/trunk:
LUCENE-6958: Improve CustomAnalyzer to take class references to factories as 
alternative to their SPI name. This enables compile-time safety when defining 
analyzer's components

> Improve CustomAnalyzer to also allow to specify factory directly (for 
> compile-time safety)
> --
>
> Key: LUCENE-6958
> URL: https://issues.apache.org/jira/browse/LUCENE-6958
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5
>
> Attachments: LUCENE-6958.patch, LUCENE-6958.patch
>
>
> Currently CustomAnalyzer only allows specifying the SPI names of factories. 
> As the fluent builder pattern is mostly used inside Java code, it is better 
> for type safety to optionally also allow specifying the factory class 
> directly (using compile-time-safe patterns like 
> {{.withTokenizer(WhitespaceTokenizerFactory.class)}}). With the string names, 
> you get the error only at runtime. Of course this does not help with wrongly 
> spelled parameter names, but it has the nice side effect that you can click 
> on the class name in your code to get javadocs with the parameter names.
> This issue will add this functionality and update the docs/example.
> Thanks to [~shaie] for suggesting this!






[jira] [Commented] (SOLR-8459) NPE using TermVectorComponent in combination with ExactStatsCache

2016-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082816#comment-15082816
 ] 

Cao Manh Dat commented on SOLR-8459:


Not yet. I'm waiting for committers to commit this patch to trunk.

> NPE using TermVectorComponent in combination with ExactStatsCache
> -
>
> Key: SOLR-8459
> URL: https://issues.apache.org/jira/browse/SOLR-8459
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Andreas Daffner
> Attachments: SOLR-8459.patch
>
>
> Hello,
> I am getting an NPE when using the TermVectorComponent in combination with 
> ExactStatsCache.
> I am using SOLR 5.3.0 with 4 shards in total.
> I set up my solrconfig.xml as described in these 2 links:
> TermVectorComponent:
> https://cwiki.apache.org/confluence/display/solr/The+Term+Vector+Component
> ExactStatsCache:
> https://cwiki.apache.org/confluence/display/solr/Distributed+Requests#Configuring+statsCache+implementation
> My snippets from solrconfig.xml:
> {code}
> ...
>   <statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
>   <searchComponent name="tvComponent"
>class="org.apache.solr.handler.component.TermVectorComponent"/>
>   <requestHandler name="/tvrh"
>class="org.apache.solr.handler.component.SearchHandler">
> <lst name="defaults">
>   <bool name="tv">true</bool>
> </lst>
> <arr name="last-components">
>   <str>tvComponent</str>
> </arr>
>   </requestHandler>
> ...
> {code}
> Unfortunately a request to SOLR like 
> "http://host/solr/corename/tvrh?q=site_url_id:74" ends up with this NPE:
> {code}
> 4329458 ERROR (qtp59559151-17) [c:SingleDomainSite_11 s:shard1 r:core_node1 
> x:SingleDomainSite_11_shard1_replica1] o.a.s.c.SolrCore 
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:454)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> According to https://issues.apache.org/jira/browse/SOLR-7756 this bug should 
> have been fixed in SOLR 5.3.0, but obviously the NPE is still present.
> Can you please help me here?









Re: Breaking Java back-compat in Solr

2016-01-05 Thread Mark Miller
We have never promised or delivered back-compat on Java APIs beyond best
effort. I agree we should stick to HTTP APIs and SolrJ for stronger back
compat, and simply do our best to be reasonable with the rest.

Mark
On Tue, Jan 5, 2016 at 2:05 AM Shai Erera  wrote:

> In Lucene there are three types of public APIs:
>
> "public stable": not annotated and for them we try to maintain backward
> compatibility across minor releases
> "public experimental": annotated with @lucene.experimental. If possible,
> back-compat is maintained, but otherwise you can expect them to change
> between minor versions
> "public internal": annotated with @lucene.internal. Due to Java
> limitations, these are public classes/methods, but we have no intention to
> keep them backward compatible.
>
> Even the "public stable" APIs are further divided, though not through
> annotations, and more a feel and consensus on a particular issue, into
> expert and non-expert APIs. Take for example the overhaul that was done to
> SpanQuery in LUCENE-6308. I believe it was decided that users who implement
> their own SpanQuery, are probably expert users and can take the API changes.
>
> When it comes to Solr, I always viewed user-facing APIs as SolrJ and the
> REST APIs and therefore referred to them as "public stable". When it comes
> to plug-ins, I view them at most as "public expert". I don't think that
> Solr users who write their own plug-ins should be viewed as simple users.
> They are more than the average user IMO. Even if the plug-in that they
> write is super simple, like adding a log message, how bad can an API break
> be for them to change their code?
>
> I think that, even if not publicly announced, in Lucene we sort of dropped
> the requirement of jar drop-in ability. I personally believe that there are
> only few users who expect to upgrade a search server running Solr from
> version X to Y, without this affecting their code. So for the majority of
> users who use SolrJ and the REST APIs, this will be the case since we
> guarantee back-compat. For those who write their own plug-ins, it may be a
> matter of luck. Either they do or don't have to recompile their code. For
> the super expert ones, e.g. who write a plug-in from scratch or do really
> invasive things, I think it's fair to ask them to align with the new APIs.
>
> I implemented a bunch of SpanQuery subclasses that worked fine in 5.2.1. Last
> week I decided to upgrade to 5.4 and had to rewrite the queries. In the
> process, I discovered that the new API helps me implement my queries
> better, and one of the queries is now available from Lucene directly. I didn't
> complain about it, as painful as it was, since I trust the developers who
> made that decision to weigh the pros and cons of the API changes.
>
> Also, I find it odd that some of the arguments made here distinguish Solr
> users from Lucene users. Isn't a Solr user who implements his own
> QParserPlugin subject to the same API changes (e.g. SpanQuery) in Lucene
> that non-Solr users are? Why do we even attempt to make that distinction?
> Solr is built on top of Lucene, so most likely the plugin that you write
> will need to interact with the Lucene API too.
>
> As to the particular issue in SOLR-8475, the API break will be resolved by
> any modern IDE with a simple "Organize Imports". And I don't view
> SolrIndexSearcher as anywhere near the top-level of the APIs that we should
> maintain. Nor do I think that SolrCache is one such (as another example,
> unrelated to that issue). The interfaces that define plugins, maybe. But
> the internals are subject, IMO, to either be "solr expert" or "solr
> internal".
>
> If we want to refactor those pieces in Solr (and I think that we should,
> and I try to help with it), we must be more flexible around our API
> guarantees. Also, trust the devs to not rewrite APIs just for the heck of
> it, but if they do, it's for a good reason, for them (as code maintainers)
> and our users.
>
> Shai
>
> On Tue, Jan 5, 2016 at 8:36 AM Noble Paul  wrote:
>
>> I would say, SolrJ and REST APIs MUST BE backward compatible between
>> minor versions.
>>
>> The question is about the internal Java APIs. It is impossible to get
>> 100% right on these things. If we can start annotating classes/methods
>> and let users suggest stuff, then we should be in a reasonably good
>> situation over a year.
>> As a first step, open a ticket and define a process to make a certain
>> API back-compat, maybe via an annotation or whatever.
>>
>> On Tue, Jan 5, 2016 at 11:19 AM, Anshum Gupta 
>> wrote:
>> > Thanks David,
>> >
>> > I agree with what you've suggested but the bigger question here again is
>> > *which* files do we guarantee back-compat for. I suggest we guarantee
>> > back-compat for SolrJ and REST APIs. For everything else i.e. Java
>> APIs, we
>> > should try and maintain back-compat but there shouldn't be a guarantee
>> and
>> > should be the 

[jira] [Resolved] (LUCENE-6958) Improve CustomAnalyzer to also allow to specify factory directly (for compile-time safety)

2016-01-05 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-6958.
---
   Resolution: Fixed
Fix Version/s: Trunk

> Improve CustomAnalyzer to also allow to specify factory directly (for 
> compile-time safety)
> --
>
> Key: LUCENE-6958
> URL: https://issues.apache.org/jira/browse/LUCENE-6958
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE-6958.patch, LUCENE-6958.patch
>
>
> Currently CustomAnalyzer only allows to specify the SPI names of factories. 
> As the fluent builder pattern is mostly used inside Java code, it is better 
> for type safety to optionally also specify the factory class directly (using 
> compile-time safe patterns like 
> {{.withTokenizer(WhitespaceTokenizerFactory.class)}}). With the string names, 
> you get the error only at runtime. Of course this does not help with wrongly 
> spelled parameter names, but it also has the side effect that you can click 
> on the class name in your code to get javadocs with the parameter names.
> This issue will add this functionality and update the docs/example.
> Thanks to [~shaie] for suggesting this!
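The type-safety contrast described above can be illustrated with a self-contained sketch (plain Java, not Lucene's actual classes): an overloaded builder method that accepts either an SPI name, which fails only at runtime, or a factory Class, which the compiler verifies.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only (not Lucene source): contrasts an SPI-name
// lookup, which fails at runtime, with a Class-based overload, which the
// compiler checks.
public class BuilderSketch {
  interface TokenizerFactory {}
  static class WhitespaceTokenizerFactory implements TokenizerFactory {}

  // Stand-in for the SPI registry that maps names to factory classes.
  static final Map<String, Class<? extends TokenizerFactory>> SPI = new HashMap<>();
  static { SPI.put("whitespace", WhitespaceTokenizerFactory.class); }

  static class Builder {
    Class<? extends TokenizerFactory> tokenizer;

    // Runtime lookup: a typo like "whitepsace" is only caught when this runs.
    Builder withTokenizer(String spiName) {
      Class<? extends TokenizerFactory> c = SPI.get(spiName);
      if (c == null) throw new IllegalArgumentException("No tokenizer named: " + spiName);
      tokenizer = c;
      return this;
    }

    // Compile-time safety: a misspelled class name does not even compile,
    // and the IDE can jump from here to the factory's javadocs.
    Builder withTokenizer(Class<? extends TokenizerFactory> factory) {
      tokenizer = factory;
      return this;
    }
  }

  public static void main(String[] args) {
    Builder byName = new Builder().withTokenizer("whitespace");
    Builder byClass = new Builder().withTokenizer(WhitespaceTokenizerFactory.class);
    System.out.println(byName.tokenizer == byClass.tokenizer); // prints true
  }
}
```

Both overloads end up with the same factory class; the difference is only where an error would surface.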



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6958) Improve CustomAnalyzer to also allow to specify factory directly (for compile-time safety)

2016-01-05 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-6958:
--
Attachment: LUCENE-6958.patch

New patch with better exception handling based on the linked issue.

> Improve CustomAnalyzer to also allow to specify factory directly (for 
> compile-time safety)
> --
>
> Key: LUCENE-6958
> URL: https://issues.apache.org/jira/browse/LUCENE-6958
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5
>
> Attachments: LUCENE-6958.patch, LUCENE-6958.patch
>
>
> Currently CustomAnalyzer only allows to specify the SPI names of factories. 
> As the fluent builder pattern is mostly used inside Java code, it is better 
> for type safety to optionally also specify the factory class directly (using 
> compile-time safe patterns like 
> {{.withTokenizer(WhitespaceTokenizerFactory.class)}}). With the string names, 
> you get the error only at runtime. Of course this does not help with wrongly 
> spelled parameter names, but it also has the side effect that you can click 
> on the class name in your code to get javadocs with the parameter names.
> This issue will add this functionality and update the docs/example.
> Thanks to [~shaie] for suggesting this!






[jira] [Commented] (LUCENE-6958) Improve CustomAnalyzer to also allow to specify factory directly (for compile-time safety)

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6958?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082730#comment-15082730
 ] 

ASF subversion and git services commented on LUCENE-6958:
-

Commit 1723027 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1723027 ]

LUCENE-6958: Improve CustomAnalyzer to take class references to factories as 
alternative to their SPI name. This enables compile-time safety when defining 
analyzer's components

> Improve CustomAnalyzer to also allow to specify factory directly (for 
> compile-time safety)
> --
>
> Key: LUCENE-6958
> URL: https://issues.apache.org/jira/browse/LUCENE-6958
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Affects Versions: 5.4
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
> Fix For: 5.5
>
> Attachments: LUCENE-6958.patch, LUCENE-6958.patch
>
>
> Currently CustomAnalyzer only allows to specify the SPI names of factories. 
> As the fluent builder pattern is mostly used inside Java code, it is better 
> for type safety to optionally also specify the factory class directly (using 
> compile-time safe patterns like 
> {{.withTokenizer(WhitespaceTokenizerFactory.class)}}). With the string names, 
> you get the error only at runtime. Of course this does not help with wrongly 
> spelled parameter names, but it also has the side effect that you can click 
> on the class name in your code to get javadocs with the parameter names.
> This issue will add this functionality and update the docs/example.
> Thanks to [~shaie] for suggesting this!






[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083141#comment-15083141
 ] 

Mark Miller commented on SOLR-8453:
---

I think it's more related to using HttpClient than HTTP. We see random 
connection resets in many tests that go away with this patch, but looking at the 
one test that consistently fails (SolrExampleStreamingTest#testUpdateField), 
we seem to hit the problem when HttpClient is cleaning up and closing the 
output stream, which flushes a buffer.

{code}
java.net.SocketException: Connection reset
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
at java.net.SocketOutputStream.write(SocketOutputStream.java:153)
at 
org.apache.http.impl.io.AbstractSessionOutputBuffer.flushBuffer(AbstractSessionOutputBuffer.java:159)
at 
org.apache.http.impl.io.AbstractSessionOutputBuffer.flush(AbstractSessionOutputBuffer.java:166)
at 
org.apache.http.impl.io.ChunkedOutputStream.close(ChunkedOutputStream.java:205)
at 
org.apache.http.impl.entity.EntitySerializer.serialize(EntitySerializer.java:118)
at 
org.apache.http.impl.AbstractHttpClientConnection.sendRequestEntity(AbstractHttpClientConnection.java:265)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.sendRequestEntity(ManagedClientConnectionImpl.java:203)
at 
org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:237)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:122)
at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:685)
at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:487)
at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.sendUpdateStream(ConcurrentUpdateSolrClient.java:280)
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient$Runner.run(ConcurrentUpdateSolrClient.java:161)
{code}

It may also be that it only happens with this chunked encoding. On close, it 
tries to write a 'closing chunk' and then flush: 
https://github.com/apache/httpcore/blob/4.0.x/httpcore/src/main/java/org/apache/http/impl/io/ChunkedOutputStream.java

If there is a problem here we get the connection reset.

It does actually seem like a bit of a race to me and I'm not sure how to 
address that yet (other than this patch). If you remove the 250ms poll that 
happens in 
ConcurrentUpdateSolrClient->sendUpdateStream->EntityTemplate->writeTo, it seems 
to go away. But that would indicate our connection management is a bit fragile, 
with the client kind of racing the server.

Still playing around to try and find other potential fixes.
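The failure mode described above, a client that keeps streaming a request body after the server has already aborted the connection, can be reproduced with plain java.net sockets, independent of Solr or HttpClient. This is a hypothetical minimal sketch: SO_LINGER with timeout 0 is used to force a TCP RST on close, standing in for the server thread that died via exception.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal reproduction sketch (not Solr/HttpClient code): the "server" side
// aborts the connection while the "client" is still streaming data, so a
// later write/flush fails with connection reset or broken pipe, matching
// the stack trace above.
public class ResetDemo {
  public static String streamAfterAbort() throws Exception {
    try (ServerSocket server = new ServerSocket(0);
         Socket client = new Socket("127.0.0.1", server.getLocalPort());
         Socket serverSide = server.accept()) {
      serverSide.setSoLinger(true, 0); // close() now aborts with a TCP RST
      serverSide.close();
      Thread.sleep(200);               // give the RST time to reach the client
      OutputStream out = client.getOutputStream();
      try {
        for (int i = 0; i < 10; i++) { // keep writing "chunks" after the abort
          out.write(new byte[8192]);
          out.flush();
        }
        return "no error";
      } catch (IOException e) {        // e.g. SocketException: Connection reset
        return "IOException";
      }
    }
  }

  public static void main(String[] args) throws Exception {
    System.out.println(streamAfterAbort());
  }
}
```

The race in the real tests is the same shape: whether the client observes the original error response or a raw socket reset depends on how quickly the server tears down the connection.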

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.

[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8453:
---
Attachment: SOLR-8453_test.patch

Here's a test with normal solrj clients that reproduces HTTP level exceptions.  
It uses multiple threads and large request sizes.

Example exception summary (from 10 clients):
{code}
3567 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->SocketException(Connection reset)
3569 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3569 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3570 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3571 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3571 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3571 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3572 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3572 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3572 ERROR (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.c.s.TestSolrJErrorHandling CHAIN: 
->SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
3573 INFO  (TEST-TestSolrJErrorHandling.testWithBinary-seed#[5C3578C3D02417A3]) 
[] o.a.s.SolrTestCaseJ4 ###Ending testWithBinary
{code}

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[JENKINS] Lucene-Solr-SmokeRelease-5.3 - Build # 7 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.3/7/

No tests ran.

Build Log:
[...truncated 53068 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 461 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.7
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (13.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.3.2-src.tgz...
   [smoker] 28.5 MB in 0.04 sec (725.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.2.tgz...
   [smoker] 65.7 MB in 0.08 sec (778.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.3.2.zip...
   [smoker] 75.9 MB in 0.10 sec (769.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.3.2.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.2.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] test demo with 1.8...
   [smoker]   got 6059 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.3.2-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.4.0
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py", line 1449, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1394, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1432, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, svnRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 583, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
svnRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 762, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.3/dev-tools/scripts/smokeTestRelease.py",
 line 1387, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by TestBackwardsCompatibility?

[jira] [Commented] (SOLR-8475) Some refactoring to SolrIndexSearcher

2016-01-05 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083345#comment-15083345
 ] 

Shai Erera commented on SOLR-8475:
--

It looks fine, though I really think it's overkill :). Let's see if we get 
to a consensus on that issue on the dev list, and if not, I'll try your 
approach.

> Some refactoring to SolrIndexSearcher
> -
>
> Key: SOLR-8475
> URL: https://issues.apache.org/jira/browse/SOLR-8475
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8475.patch, SOLR-8475.patch, SOLR-8475.patch, 
> SOLR-8475.patch, SOLR-8475.patch
>
>
> While reviewing {{SolrIndexSearcher}}, I started to correct a thing here and 
> there, and eventually it led to these changes:
> * Moving {{QueryCommand}} and {{QueryResult}} to their own classes.
> * Moving FilterImpl into a private static class (was package-private and 
> defined in the same .java file, but separate class).
> * Some code formatting, imports organizing and minor log changes.
> * Removed fieldNames (handled the TODO in the code)
> * Got rid of usage of deprecated classes such as {{LegacyNumericUtils}} and 
> {{Legacy-*-Field}}.
> I wish we'd cut down the size of this file much more (it's 2500 lines now), 
> but I've decided to stop here so that the patch is manageable. I would like 
> to explore further refactorings afterwards, e.g. extracting cache management 
> code to an outer class (but keep {{SolrIndexSearcher}}'s API the same, if 
> possible).
> If you have additional ideas of more cleanups / simplifications, I'd be glad 
> to do them.






[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083084#comment-15083084
 ] 

Noble Paul commented on SOLR-8470:
--

[~nirmalav] Thanks a lot

> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable
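A minimal sketch of the configurable-TTL idea follows, assuming a system property named pkiauth.ttl (the name used later in this thread); the actual Solr patch may read or validate it differently. The previously hardcoded 5000 ms becomes the fallback default.

```java
// Hedged sketch of making a hardcoded TTL configurable via a system
// property. The property name "pkiauth.ttl" is an assumption taken from
// the discussion in this thread, not verified against the Solr source.
public class TtlDemo {
  static int ttlMillis() {
    // Integer.getInteger parses the system property, falling back to the
    // previously hardcoded default of 5000 ms when it is absent.
    return Integer.getInteger("pkiauth.ttl", 5000);
  }

  public static void main(String[] args) {
    System.out.println(ttlMillis());            // 5000 unless -Dpkiauth.ttl is set
    System.setProperty("pkiauth.ttl", "60000"); // e.g. raise to 60 s at startup
    System.out.println(ttlMillis());            // prints 60000
  }
}
```

In production one would typically pass `-Dpkiauth.ttl=60000` on the JVM command line rather than call System.setProperty.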






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15446 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15446/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic

Error Message:
Connection reset

Stack Trace:
java.net.SocketException: Connection reset
at 
__randomizedtesting.SeedInfo.seed([9C0609EC27F49304:37FC14F9F828152A]:0)
at java.net.SocketInputStream.read(SocketInputStream.java:209)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:139)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:155)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:284)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:261)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:165)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:167)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:272)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:124)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:271)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:184)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:88)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:110)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
at 
org.apache.lucene.replicator.http.HttpClientBase.executeGET(HttpClientBase.java:159)
at 
org.apache.lucene.replicator.http.HttpReplicator.checkForUpdate(HttpReplicator.java:51)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:196)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:402)
at 
org.apache.lucene.replicator.http.HttpReplicatorTest.testBasic(HttpReplicatorTest.java:122)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 

[jira] [Closed] (SOLR-8435) Long update times Solr 5.3.1

2016-01-05 Thread Kenny Knecht (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kenny Knecht closed SOLR-8435.
--
Resolution: Fixed

This issue seemed to be caused by slower disks in our second setup, but the 
different behaviour between 5.2.1 and 5.3.1 led us to believe this was actually 
a bug. Sorry for bothering you with it!

> Long update times Solr 5.3.1
> 
>
> Key: SOLR-8435
> URL: https://issues.apache.org/jira/browse/SOLR-8435
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 5.3.1
> Environment: Ubuntu server 128Gb
>Reporter: Kenny Knecht
> Fix For: 5.2.1
>
>
> We have two 128 GB Ubuntu servers in a SolrCloud config. We update by curling 
> JSON files of 20,000 documents. In 5.2.1 this consistently takes between 19 
> and 24 seconds. In 5.3.1 it usually takes 20 s, but for about 20% of the 
> files it takes much longer: up to 500 s! Which files are affected seems to be quite 
> random. Is this a known bug? Any workaround? Is it fixed in 5.4?






[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083272#comment-15083272
 ] 

Mark Miller commented on SOLR-8453:
---

bq. On trunk right now, you have to drop that poll to 12ms or less on my 
machine to get the test to pass.

And on 5x, Jetty is not sensitive to the length of the poll it seems (at least 
up to 30 seconds).

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.
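The failure mode described above can be reproduced outside Solr with plain sockets. The sketch below is not Solr or Jetty code; it only illustrates the race: a server that aborts mid-request and closes its socket causes a still-streaming client to see a broken pipe or connection reset rather than the server's original error response.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class EarlyCloseDemo {
    // Minimal illustration of the issue: the "server" reads one byte, then
    // "fails" and closes the connection while the client is still streaming
    // the request body, so the client's writes eventually throw an
    // IOException (broken pipe / connection reset) instead of the client
    // ever seeing the server's original error.
    public static boolean clientSawReset() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.getInputStream().read(); // read a byte, then abort and close
                } catch (IOException ignored) {
                }
            });
            serverThread.start();
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                OutputStream out = client.getOutputStream();
                byte[] chunk = new byte[8192];
                for (int i = 0; i < 10_000; i++) {
                    out.write(chunk); // keep streaming after the server is gone
                }
                return false; // no error observed (unexpected)
            } catch (IOException e) {
                return true; // broken pipe or connection reset, as described
            } finally {
                serverThread.join();
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("client saw I/O error: " + clientSawReset());
    }
}
```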



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2016-01-05 Thread Nirmala Venkatraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083072#comment-15083072
 ] 

Nirmala Venkatraman commented on SOLR-8422:
---

I applied Noble's patch for pkiauth.ttl (SOLR-8470), set the ttl parameter to 
60 sec (the default is 5 sec), and ran another batch of indexing load. The good 
news is that I didn't hit any of the 401 exceptions, but one of the nodes, 
sgdsolar7, went into recovery with a ZK session expiration in /overseer/elect. 

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 5 
> servers in the solrcloud. A sample screen shot of the collection/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to, say, any one of the solr 
> servers in the solrcloud, and the request is sent to a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is being passed. 
> As a result, sgdsolar2 throws a 401 error back to the source server sgdsolar1 
> and all the way back to the solr indexing tool.
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=unid,sequence,folderunid=xml=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4=10=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request to the server that has collection1, I see basic authentication 
> working as expected.
> I double checked and see both sgdsolar1/sgdsolar2 servers have the patched 
> solr-core and solr-solrj jar files under the solr-webapp folder that were 
> provided via earlier patches that Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326  fixing PKIAuthenticationPlugin.
> SOLR-8355
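As context for Step 2 above: an Authorization header set by the original client does not automatically survive server-side routing, so the forwarding node must attach credentials itself. The sketch below only shows how a Basic auth header value is constructed (standard RFC 7617 encoding with JDK classes); the user/password values are illustrative, not taken from this report.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeader {
    // Build the value of an HTTP "Authorization" header for Basic auth:
    // "Basic " followed by base64("user:password"). This is the header that
    // the inter-node hop in Step 2 fails to pass along.
    static String value(String user, String password) {
        String token = Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Illustrative credentials only.
        System.out.println(value("solr", "SolrRocks"));
    }
}
```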






[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Nirmala Venkatraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083111#comment-15083111
 ] 

Nirmala Venkatraman commented on SOLR-8470:
---

After applying the ttl patch and setting it to 60 sec, one of the nodes hit 
this error. The most likely culprit is slightly longer GC pauses. Do you think 
we should set autoReplicaFailoverWorkLoopDelay to a value greater than the 
default of 10 sec?

2016-01-04 23:05:37.205 ERROR 
(OverseerHdfsCoreFailoverThread-239245611805900804-sgdsolar7.swg.usma.ibm.com:8984_solr-n_000133)
 [   ] o.a.s.c.OverseerAutoReplicaFailoverThread 
OverseerAutoReplicaFailoverThread had an error in its thread work 
loop.:org.apache.solr.common.SolrException: Error reading cluster properties
at 
org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:732)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.doWork(OverseerAutoReplicaFailoverThread.java:152)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:131)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:502)
at org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1342)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1040)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
at 
org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at 
org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
at 
org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:725)
... 3 more

2016-01-04 23:05:37.218 ERROR (OverseerExitThread) [   ] o.a.s.c.Overseer could 
not read the data
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /overseer_elect/leader
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.checkIfIamStillLeader(Overseer.java:300)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater.access$300(Overseer.java:87)
at 
org.apache.solr.cloud.Overseer$ClusterStateUpdater$2.run(Overseer.java:261)
2016-01-04 23:05:37.206 ERROR (qtp829053325-487) [c:collection33 s:shard1 
r:core_node2 x:collection33_shard1_replica1] o.a.s.c.SolrCore 
org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - Updates are 
disabled.


> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable
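A minimal sketch of what "configurable through a system property" typically looks like. The property name `pkiauth.ttl` is the one mentioned elsewhere in this thread; the class and method names below are illustrative, not the actual patch.

```java
public class PkiTtlConfig {
    // Property name mentioned in this thread; default matches the
    // previously hardcoded 5000ms TTL.
    static final String TTL_PROP = "pkiauth.ttl";
    static final int DEFAULT_TTL_MS = 5000;

    // Integer.getInteger reads the system property and falls back to the
    // supplied default when the property is unset or unparsable.
    static int resolveTtlMs() {
        return Integer.getInteger(TTL_PROP, DEFAULT_TTL_MS);
    }

    public static void main(String[] args) {
        // Equivalent to starting the JVM with -Dpkiauth.ttl=60000
        System.setProperty(TTL_PROP, "60000");
        System.out.println(resolveTtlMs());
    }
}
```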






[jira] [Created] (SOLR-8490) factor out a QueryCommand (super) class from SolrIndexSearcher.QueryCommand

2016-01-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8490:
-

 Summary: factor out a QueryCommand (super) class from 
SolrIndexSearcher.QueryCommand
 Key: SOLR-8490
 URL: https://issues.apache.org/jira/browse/SOLR-8490
 Project: Solr
  Issue Type: Sub-task
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


part 0 (for trunk and branch_5x) - preparation:
 * two minor changes in {{QueryComponent.java}} and {{SolrIndexSearcher.java}} 
to simplify the subsequent actual changes

part 1 (for trunk and branch_5x) - factor out a {{QueryCommand}} (super) class 
from {{SolrIndexSearcher.QueryCommand}}:
* for back-compat reasons {{SolrIndexSearcher.QueryCommand}} inherits from the 
factored out class
* for private variables and methods use {{QueryCommand}} instead of 
{{SolrIndexSearcher.QueryCommand}}
* public methods and constructors taking {{SolrIndexSearcher.QueryCommand}} 
args marked @Deprecated and equivalents with {{QueryCommand}} arg created

part 2 (for trunk only) - remove deprecated {{SolrIndexSearcher.QueryCommand}} 
class:
* affected/changed public or protected methods:
** {{ResponseBuilder.getQueryCommand()}}
** {{SolrIndexSearcher.search(QueryResult qr, QueryCommand cmd)}}
** {{SolrIndexSearcher.sortDocSet(QueryResult qr, QueryCommand cmd)}}
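The back-compat arrangement in part 1 can be sketched as follows. This is a simplified stand-in, not the real API: field and class names other than {{QueryCommand}} are illustrative, and the interim adapter exists only to break the simple-name clash between the deprecated nested class and the new top-level class it extends.

```java
// New top-level class that carries the actual state (the "len" field and
// its accessors are illustrative, not the real QueryCommand API).
class QueryCommand {
    private int len;
    public QueryCommand setLen(int len) { this.len = len; return this; }
    public int getLen() { return len; }
}

// Interim helper: gives the deprecated nested class below a supertype whose
// simple name differs from its own, sidestepping the name clash.
@Deprecated
class QueryCommandAdapter extends QueryCommand {
}

// Stand-in for SolrIndexSearcher: the old nested type survives for existing
// callers but is now just a deprecated shell over the extracted class.
class SolrIndexSearcherSketch {
    /** @deprecated use the top-level {@code QueryCommand} instead. */
    @Deprecated
    public static class QueryCommand extends QueryCommandAdapter {
    }
}
```

Existing code that constructs the nested type keeps compiling, while new APIs can accept the top-level class.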







[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083218#comment-15083218
 ] 

Noble Paul commented on SOLR-8470:
--

This is because of a ZK session timeout. Maybe you need to use a higher timeout. 



> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable






[jira] [Updated] (SOLR-8490) factor out a QueryCommand (super) class from SolrIndexSearcher.QueryCommand

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8490:
--
Attachment: SOLR-8490-part2.patch
SOLR-8490-part1.patch
SOLR-8490-part0.patch

> factor out a QueryCommand (super) class from SolrIndexSearcher.QueryCommand
> ---
>
> Key: SOLR-8490
> URL: https://issues.apache.org/jira/browse/SOLR-8490
> Project: Solr
>  Issue Type: Sub-task
>  Components: search
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8490-part0.patch, SOLR-8490-part1.patch, 
> SOLR-8490-part2.patch
>
>
> part 0 (for trunk and branch_5x) - preparation:
>  * two minor changes in {{QueryComponent.java}} and 
> {{SolrIndexSearcher.java}} to simplify the subsequent actual changes
> part 1 (for trunk and branch_5x) - factor out a {{QueryCommand}} (super) 
> class from {{SolrIndexSearcher.QueryCommand}}:
> * for back-compat reasons {{SolrIndexSearcher.QueryCommand}} inherits from 
> the factored out class
> * for private variables and methods use {{QueryCommand}} instead of 
> {{SolrIndexSearcher.QueryCommand}}
> * public methods and constructors taking {{SolrIndexSearcher.QueryCommand}} 
> args marked @Deprecated and equivalents with {{QueryCommand}} arg created
> part 2 (for trunk only) - remove deprecated 
> {{SolrIndexSearcher.QueryCommand}} class:
> * affected/changed public or protected methods:
> ** {{ResponseBuilder.getQueryCommand()}}
> ** {{SolrIndexSearcher.search(QueryResult qr, QueryCommand cmd)}}
> ** {{SolrIndexSearcher.sortDocSet(QueryResult qr, QueryCommand cmd)}}






[jira] [Commented] (SOLR-8475) Some refactoring to SolrIndexSearcher

2016-01-05 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083191#comment-15083191
 ] 

Christine Poerschke commented on SOLR-8475:
---

bq. If it's possible to leave deprecated inner classes extending the extracted 
classes, then existing user code should work just fine. I haven't attempted to 
do this, but I think that should work.

SOLR-8490 (created as a sub-task of this ticket) is my attempt at this for 
{{QueryCommand}} only. Having the deprecated inner class extend the extracted 
class of the same name was a little tricky, but an interim helper class seems 
to work, though perhaps there is a more proper alternative to that as well.
{code}
+++ b/solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java
...
-  public static class QueryCommand {
+  @Deprecated
+  public static class QueryCommand extends QueryCommandAdapter {

+++ b/solr/core/src/java/org/apache/solr/search/QueryCommandAdapter.java
...
+@Deprecated
+public class QueryCommandAdapter extends QueryCommand {

+++ b/solr/core/src/java/org/apache/solr/search/QueryCommand.java
...
+public class QueryCommand {
{code}

> Some refactoring to SolrIndexSearcher
> -
>
> Key: SOLR-8475
> URL: https://issues.apache.org/jira/browse/SOLR-8475
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8475.patch, SOLR-8475.patch, SOLR-8475.patch, 
> SOLR-8475.patch, SOLR-8475.patch
>
>
> While reviewing {{SolrIndexSearcher}}, I started to correct a thing here and 
> there, and eventually it led to these changes:
> * Moving {{QueryCommand}} and {{QueryResult}} to their own classes.
> * Moving FilterImpl into a private static class (was package-private and 
> defined in the same .java file, but separate class).
> * Some code formatting, imports organizing and minor log changes.
> * Removed fieldNames (handled the TODO in the code)
> * Got rid of usage of deprecated classes such as {{LegacyNumericUtils}} and 
> {{Legacy-*-Field}}.
> I wish we'd cut down the size of this file much more (it's 2500 lines now), 
> but I've decided to stop here so that the patch is manageable. I would like 
> to explore further refactorings afterwards, e.g. extracting cache management 
> code to an outer class (but keep {{SolrIndexSearcher}}'s API the same, if 
> possible).
> If you have additional ideas of more cleanups / simplifications, I'd be glad 
> to do them.






[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083159#comment-15083159
 ] 

Mark Miller commented on SOLR-8453:
---

bq. If you remove the 250ms poll that happens ... with the client kind of 
racing the server.

On trunk right now, you have to drop that poll to 12ms or less on my machine to 
get the test to pass.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Commented] (SOLR-8470) Make TTL of PKIAuthenticationPlugin's tokens configurable through a system property

2016-01-05 Thread Nirmala Venkatraman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083075#comment-15083075
 ] 

Nirmala Venkatraman commented on SOLR-8470:
---

I applied Noble's patch for pkiauth.ttl (SOLR-8470), set the ttl parameter to 
60 sec (the default is 5 sec), and ran another batch of indexing load. The good 
news is that I didn't hit any of the 401 exceptions seen in SOLR-8422, but one 
of the nodes, sgdsolar7, went into recovery with a ZK session expiration in 
/overseer/elect. 
So I think this is a good fix for 5.3.2.

> Make TTL of PKIAuthenticationPlugin's tokens configurable through a system 
> property
> ---
>
> Key: SOLR-8470
> URL: https://issues.apache.org/jira/browse/SOLR-8470
> Project: Solr
>  Issue Type: Improvement
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 5.3.2, 5.5, Trunk
>
> Attachments: SOLR-8470.patch
>
>
> Currently the PKIAuthenticationPlugin has hardcoded the ttl to 5000ms. There 
> are users who have experienced timeouts. Make this configurable






[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 312 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/312/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:49463/ji/cz","node_name":"127.0.0.1:49463_ji%2Fcz","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"8000-7fff",   "state":"active", 
  "replicas":{ "core_node1":{   "state":"down",   
"base_url":"http://127.0.0.1:60198/ji/cz",   
"core":"c8n_1x3_lf_shard1_replica2",   
"node_name":"127.0.0.1:60198_ji%2Fcz"}, "core_node2":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:49463/ji/cz",   
"node_name":"127.0.0.1:49463_ji%2Fcz",   "state":"active",   
"leader":"true"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica3",   
"base_url":"http://127.0.0.1:53584/ji/cz",   
"node_name":"127.0.0.1:53584_ji%2Fcz",   "state":"down"}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node2:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:49463/ji/cz","node_name":"127.0.0.1:49463_ji%2Fcz","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "state":"down",
  "base_url":"http://127.0.0.1:60198/ji/cz",
  "core":"c8n_1x3_lf_shard1_replica2",
  "node_name":"127.0.0.1:60198_ji%2Fcz"},
"core_node2":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:49463/ji/cz",
  "node_name":"127.0.0.1:49463_ji%2Fcz",
  "state":"active",
  "leader":"true"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica3",
  "base_url":"http://127.0.0.1:53584/ji/cz",
  "node_name":"127.0.0.1:53584_ji%2Fcz",
  "state":"down"}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([3EB058554FF8D2FF:B6E4678FE104BF07]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:171)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 

[jira] [Updated] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-8453:
---
Attachment: SOLR-8453_test.patch

Here's an updated test that dedups the exception summary and cranks up the 
number of client threads, just to see what type of errors we can get.

{code}
10714 ERROR (TEST-TestSolrJErrorHandling.testWithXml-seed#[CDCE136AF9E0FF01]) [ 
   ] o.a.s.c.s.TestSolrJErrorHandling EXCEPTION LIST:
98) 
SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Broken
 pipe)
2) 
SolrServerException->ClientProtocolException->NonRepeatableRequestException->SocketException(Protocol
 wrong type for socket)
{code}
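The dedup'd summary format above ("A->B->C(message)" plus a count) can be produced by walking each throwable's cause chain and counting identical chains. The sketch below is illustrative of that idea only, not the actual test code.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ExceptionSummary {
    // Render a throwable's cause chain as "A->B->C(message)": simple class
    // names joined by "->", with the root cause's message in parentheses.
    static String chain(Throwable t) {
        StringBuilder sb = new StringBuilder();
        for (Throwable cur = t; cur != null; cur = cur.getCause()) {
            if (sb.length() > 0) sb.append("->");
            sb.append(cur.getClass().getSimpleName());
            if (cur.getCause() == null && cur.getMessage() != null) {
                sb.append('(').append(cur.getMessage()).append(')');
            }
        }
        return sb.toString();
    }

    // Count identical chains so repeated failures collapse into one
    // "N) chain" summary line instead of hundreds of stack traces.
    static Map<String, Integer> dedup(Iterable<Throwable> errors) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (Throwable t : errors) {
            counts.merge(chain(t), 1, Integer::sum);
        }
        return counts;
    }
}
```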

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[jira] [Updated] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8312:
--
Attachment: SOLR-8312.patch

> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represents the input data size and 
> intermediate data size for each step of facet. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by user and not too large. Therefore 
> the output data set size is not included.






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 903 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/903/

No tests ran.

Build Log:
[...truncated 2230 lines...]
ERROR: Connection was broken: java.io.IOException: Unexpected termination of 
the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:50)
Caused by: java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2325)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:2794)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:801)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:299)
at 
hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:40)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:48)

Build step 'Invoke Ant' marked build as failure
ERROR: Publisher 'Archive the artifacts' failed: no workspace for 
Lucene-Solr-NightlyTests-trunk #903
ERROR: Publisher 'Publish JUnit test result report' failed: no workspace for 
Lucene-Solr-NightlyTests-trunk #903
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
ERROR: lucene is offline; cannot locate latest1.8
ERROR: lucene is offline; cannot locate latest1.8




[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083534#comment-15083534
 ] 

Yonik Seeley commented on SOLR-8453:


This test currently passes on Solr 5x.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.
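The failure mode described above can be reproduced in miniature with raw sockets. This is an illustrative sketch, not Solr or Jetty code: the server aborts the connection while the client is still streaming its request body (SO_LINGER with timeout 0 forces a TCP RST, standing in for Jetty's early close), so the client hits a connection reset instead of ever seeing the server's original error.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ResetDemo {
    /**
     * Provokes the race: server reads a little, then abruptly resets the
     * connection; the still-streaming client gets an IOException
     * (typically "Connection reset" or "Broken pipe") mid-request.
     */
    static IOException provoke() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept()) {
                    s.getInputStream().read(new byte[1024]); // consume a bit of the request
                    s.setSoLinger(true, 0); // close() will now send RST, not FIN
                } catch (IOException ignored) {}
            });
            serverThread.start();

            IOException clientError = null;
            try (Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
                OutputStream out = client.getOutputStream();
                byte[] chunk = new byte[1024];
                // Keep streaming long after the server has torn the connection down.
                for (int i = 0; i < 10_000; i++) {
                    out.write(chunk);
                    out.flush();
                }
            } catch (IOException e) {
                clientError = e; // the original server-side error is never seen
            }
            serverThread.join();
            return clientError;
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("client saw: " + provoke());
    }
}
```

The same shape of race is what the Jetty 9.2 → 9.3 upgrade exposed: the window between "server stops processing" and "server closes the socket" shrank, so the client loses the chance to read the error response.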



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083527#comment-15083527
 ] 

Michael Sun commented on SOLR-8312:
---

Here is the patch. It adds two metrics to facet telemetry:
1. inputDocSetSize: the size of the input doc set for each sub-facet.
2. numBuckets: the number of unique buckets. This is the same number as 
numBuckets in the facet query result when the numBuckets param is set to true 
in the query, and it applies to field facets only. The reasons to duplicate it 
in facet telemetry are:
* a query user may not turn on numBuckets, but the operations and monitoring 
team may still want to see numBuckets information.
* the operations and monitoring team may not be allowed to view query results.
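For illustration only, the two metrics could surface in the facet debug/telemetry output roughly like this; every key and value here except inputDocSetSize and numBuckets is hypothetical, not taken from the patch:

```json
{
  "facet-debug": {
    "processor": "FacetFieldProcessor",
    "inputDocSetSize": 1500000,
    "numBuckets": 3400,
    "sub-facet": []
  }
}
```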

> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represents the input data size and 
> intermediate data size for each step of facet. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by user and not too large. Therefore 
> the output data set size is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15448 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15448/
Java: 32bit/jdk1.8.0_66 -client -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, TransactionLog]
at __randomizedtesting.SeedInfo.seed([F2A639CF0F0D0A1F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:229)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.util.TestSolrCLIRunExample: 
1) Thread[id=2847, name=searcherExecutor-932-thread-1, state=WAITING, 
group=TGRP-TestSolrCLIRunExample] at sun.misc.Unsafe.park(Native 
Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.util.TestSolrCLIRunExample: 
   1) Thread[id=2847, name=searcherExecutor-932-thread-1, state=WAITING, 
group=TGRP-TestSolrCLIRunExample]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at 

[jira] [Commented] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083525#comment-15083525
 ] 

Michael Sun commented on SOLR-8312:
---

Here is the patch. It adds two metrics to facet telemetry:
1. inputDocSetSize: the size of the input doc set for each sub-facet.
2. numBuckets: the number of unique buckets. This is the same number as 
numBuckets in the facet query result when the numBuckets param is set to true 
in the query, and it applies to field facets only. The reasons to duplicate it 
in facet telemetry are:
* a query user may not turn on numBuckets, but the operations and monitoring 
team may still want to see numBuckets information.
* the operations and monitoring team may not be allowed to view query results.



> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represents the input data size and 
> intermediate data size for each step of facet. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by user and not too large. Therefore 
> the output data set size is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-8312) Add doc set size and number of buckets metrics

2016-01-05 Thread Michael Sun (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8312?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Sun updated SOLR-8312:
--
Comment: was deleted

(was: Here is the patch. It adds two metrics to facet telemetry:
1. inputDocSetSize: the size of input doc set for each sub-facet.
2. numBuckets: number of unique buckets. It is the same number to the 
numBuckets in facet query result if numBuckets param is set to true in query. 
and is for field facet only. The reason to dup in facet telemetry is 
* query user may not turn on numBuckets but the operation and monitoring team 
still want to view numBucket information.
* operation and monitoring team may not be allowed to view query result.)

> Add doc set size and number of buckets metrics
> --
>
> Key: SOLR-8312
> URL: https://issues.apache.org/jira/browse/SOLR-8312
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: Trunk
>
> Attachments: SOLR-8312.patch
>
>
> The doc set size and number of buckets represents the input data size and 
> intermediate data size for each step of facet. Therefore they are useful 
> metrics to be included in telemetry. 
> The output data size is usually defined by user and not too large. Therefore 
> the output data set size is not included.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-839) XML Query Parser support

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-839:
-
Affects Version/s: Trunk

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Affects Versions: 1.3, Trunk
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-839-object-parser.patch, SOLR-839.patch, 
> SOLR-839.patch, lucene-xml-query-parser-2.4-dev.jar
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-839) XML Query Parser support

2016-01-05 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083664#comment-15083664
 ] 

Christine Poerschke commented on SOLR-839:
--

[~ehatcher] - if you have no objections then I will re-assign this ticket to 
myself with a view towards committing it later this month, to trunk and 
branch_5x.

Everyone - the latest patch builds on the previous patches and code blocks in 
this ticket (patch summary above); reviews, comments, suggestions, etc. are 
welcome. Thank you.

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Affects Versions: 1.3
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-839-object-parser.patch, SOLR-839.patch, 
> SOLR-839.patch, lucene-xml-query-parser-2.4-dev.jar
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8477) Let users choose compression mode in SchemaCodecFactory

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-8477:

Attachment: SOLR-8477.patch

Some more tests and docs

> Let users choose compression mode in SchemaCodecFactory
> ---
>
> Key: SOLR-8477
> URL: https://issues.apache.org/jira/browse/SOLR-8477
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-8477.patch, SOLR-8477.patch
>
>
> Expose Lucene's compression mode (LUCENE-5914) via SchemaCodecFactory init 
> argument. By default use current default mode: Mode.BEST_SPEED.
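As a sketch of how such an init argument might be configured in solrconfig.xml — the element layout follows Solr's usual codecFactory syntax, but the parameter name is inferred from this ticket's description and may not match the committed patch:

```xml
<!-- Hypothetical configuration: opt in to higher compression
     at the cost of slower stored-field reads/writes. -->
<codecFactory class="solr.SchemaCodecFactory">
  <str name="compressionMode">BEST_COMPRESSION</str>
</codecFactory>
```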



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3909 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3909/

4 tests failed.
FAILED:  org.apache.solr.search.stats.TestDistribIDF.testMultiCollectionQuery

Error Message:
Could not create collection. 
Response{responseHeader={status=0,QTime=90423},success={null={responseHeader={status=0,QTime=1450},core=collection1_local_shard2_replica1}},failure={null=org.apache.solr.client.solrj.SolrServerException:Timeout
 occured while waiting response from server at: https://127.0.0.1:46969/solr}}

Stack Trace:
java.lang.AssertionError: Could not create collection. 
Response{responseHeader={status=0,QTime=90423},success={null={responseHeader={status=0,QTime=1450},core=collection1_local_shard2_replica1}},failure={null=org.apache.solr.client.solrj.SolrServerException:Timeout
 occured while waiting response from server at: https://127.0.0.1:46969/solr}}
at 
__randomizedtesting.SeedInfo.seed([359EE2EE6DD218AF:24ED25DFB1A41DD3]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.search.stats.TestDistribIDF.createCollection(TestDistribIDF.java:215)
at 
org.apache.solr.search.stats.TestDistribIDF.createCollection(TestDistribIDF.java:190)
at 
org.apache.solr.search.stats.TestDistribIDF.testMultiCollectionQuery(TestDistribIDF.java:157)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (SOLR-4327) SolrJ code review indicates potential for leaked HttpClient connections

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15084569#comment-15084569
 ] 

Mark Miller commented on SOLR-4327:
---

My mistake for not taking the time to really dig into this one. It was a 
mistake to add, though it had no ill effect. I've addressed it in SOLR-8451 
and added some connection-reuse testing.

> SolrJ code review indicates potential for leaked HttpClient connections
> ---
>
> Key: SOLR-4327
> URL: https://issues.apache.org/jira/browse/SOLR-4327
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.0
>Reporter: Karl Wright
>Assignee: Mark Miller
> Fix For: 4.5.1, 4.6, Trunk
>
> Attachments: SOLR-4327.patch, SOLR-4327.patch
>
>
> The SolrJ HttpSolrServer implementation does not seem to handle errors 
> properly and seems capable of leaking HttpClient connections.  See the 
> request() method in org.apache.solr.client.solrj.impl.HttpSolrServer.  The 
> issue is that exceptions thrown from within this method do not necessarily 
> consume the stream when an exception is thrown.  There is a try/finally block 
> which reads (in part):
> {code}
> } finally {
>   if (respBody != null && processor!=null) {
> try {
>   respBody.close();
> } catch (Throwable t) {} // ignore
>   }
> }
> {code}
> But, in order to always guarantee consumption of the stream, it should 
> include:
> {code}
> method.abort();
> {code}
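The cleanup pattern the report asks for can be sketched without HttpClient itself. Here `Abortable` is a hypothetical stand-in for the request object (the real API would be HttpClient's request/method class); the point is that the cleanup path both closes the body and calls abort(), so an exception from close() cannot skip the abort:

```java
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanupDemo {
    /** Hypothetical stand-in for an abortable HTTP request. */
    interface Abortable { void abort(); }

    /**
     * Mirrors the corrected finally block from the report: close the
     * response body, ignoring errors as the original code does, and
     * guarantee abort() runs so the connection is never left dangling.
     */
    static void cleanup(InputStream respBody, Abortable method) {
        try {
            if (respBody != null) {
                try {
                    respBody.close();
                } catch (Throwable t) {
                    // ignore, as in the original code
                }
            }
        } finally {
            method.abort(); // runs even if the close path blows up
        }
    }

    public static void main(String[] args) {
        AtomicBoolean aborted = new AtomicBoolean(false);
        cleanup(new ByteArrayInputStream(new byte[16]), () -> aborted.set(true));
        System.out.println("aborted=" + aborted.get()); // prints "aborted=true"
    }
}
```

Design note: in real HttpClient code, aborting unconditionally defeats keep-alive connection reuse, so production code typically aborts only when the response was not fully consumed; the sketch above just demonstrates the guarantee the report is asking for.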



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Solaris (64bit/jdk1.8.0) - Build # 310 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/310/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([9470A0F0FCD46C10:DAD3D523ED0F7D00]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:837)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15453 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15453/
Java: 64bit/jdk-9-ea+95 -XX:-UseCompressedOops -XX:+UseSerialGC 
-XX:-CompactStrings

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {
  "collection1":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"8000-",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:44422/ccb",
            "node_name":"127.0.0.1:44422_ccb",
            "state":"active",
            "leader":"true"}}},
      "shard2":{
        "range":"0-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:57216/ccb",
            "node_name":"127.0.0.1:57216_ccb",
            "state":"active",
            "leader":"true"},
          "core_node3":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:56673/ccb",
            "node_name":"127.0.0.1:56673_ccb",
            "state":"active"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "control_collection":{
    "replicationFactor":"1",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{"core_node1":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:44293/ccb",
            "node_name":"127.0.0.1:44293_ccb",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false",
    "autoCreated":"true"},
  "c8n_1x2":{
    "replicationFactor":"2",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"c8n_1x2_shard1_replica2",
            "base_url":"http://127.0.0.1:56673/ccb",
            "node_name":"127.0.0.1:56673_ccb",
            "state":"recovering"},
          "core_node2":{
            "core":"c8n_1x2_shard1_replica1",
            "base_url":"http://127.0.0.1:57216/ccb",
            "node_name":"127.0.0.1:57216_ccb",
            "state":"active",
            "leader":"true"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"},
  "collMinRf_1x3":{
    "replicationFactor":"3",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collMinRf_1x3_shard1_replica3",
            "base_url":"http://127.0.0.1:44293/ccb",
            "node_name":"127.0.0.1:44293_ccb",
            "state":"active"},
          "core_node2":{
            "core":"collMinRf_1x3_shard1_replica2",
            "base_url":"http://127.0.0.1:57216/ccb",
            "node_name":"127.0.0.1:57216_ccb",
            "state":"active",
            "leader":"true"},
          "core_node3":{
            "core":"collMinRf_1x3_shard1_replica1",
            "base_url":"http://127.0.0.1:44422/ccb",
            "node_name":"127.0.0.1:44422_ccb",
            "state":"active"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
        "base_url":"http://127.0.0.1:44422/ccb",
"node_name":"127.0.0.1:44422_ccb",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
        "base_url":"http://127.0.0.1:57216/ccb",
"node_name":"127.0.0.1:57216_ccb",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
        "base_url":"http://127.0.0.1:56673/ccb",
"node_name":"127.0.0.1:56673_ccb",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
        "base_url":"http://127.0.0.1:44293/ccb",
"node_name":"127.0.0.1:44293_ccb",
"state":"active",
"leader":"true",

[jira] [Commented] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085146#comment-15085146
 ] 

Cao Manh Dat commented on SOLR-8492:


What a wonderful patch. I'm very excited about implementing ML algorithms 
using streaming.

A couple of comments for this patch:
{code}
//wi = alpha(outcome - sigmoid)*wi + xi
double sig = sigmoid(sum(multiply(vals, weights)));
error = outcome - sig;

workingWeights = sum(vals, multiply(error * alpha, weights));

for(int i=0; i
{code}

> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and sent back to 
> the shards with the next iteration. Each call to read() returns a Tuple with 
> the averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations. When sent as a Streaming Expression to the Stream handler this 
> provides parallel iterative behavior. This same approach can be used to 
> implement other parallel iterative algorithms.
> The initial patch has  a test which simply tests the mechanics of the 
> iteration. More work will need to be done to ensure the SGD is properly 
> implemented. The distributed approach of the SGD will also need to be 
> reviewed.  
> This implementation is designed for use cases with a small number of features 
> because each feature is its own discrete field.
> An implementation which supports a higher number of features would be 
> possible by packing features into a byte array and storing as binary 
> DocValues.
> This implementation is designed to support a large sample set. With a large 
> number of shards, a sample set into the billions may be possible.
> sample Streaming Expression Syntax:
> {code}
> logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")
> {code}
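The SGD optimizer described above can be sketched stand-alone. This illustrative Java version uses the standard per-sample logistic-regression update, w_i += alpha * (outcome - sigmoid(w·x)) * x_i; it is not Solr's implementation and may differ from the exact expression in the patch comment quoted earlier:

```java
import java.util.Arrays;

public class LogitSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    /** One SGD pass per iteration over all samples; returns the learned weights. */
    static double[] train(double[][] x, int[] y, double alpha, int maxIterations) {
        double[] w = new double[x[0].length];
        for (int iter = 0; iter < maxIterations; iter++) {
            for (int n = 0; n < x.length; n++) {
                double z = 0;
                for (int i = 0; i < w.length; i++) z += w[i] * x[n][i];
                double error = y[n] - sigmoid(z);        // outcome - sigmoid
                for (int i = 0; i < w.length; i++)
                    w[i] += alpha * error * x[n][i];     // wi += alpha * error * xi
            }
        }
        return w;
    }

    public static void main(String[] args) {
        // Tiny linearly separable sample; first column is a bias feature.
        double[][] x = {{1, 0}, {1, 1}, {1, 4}, {1, 5}};
        int[] y = {0, 0, 1, 1};
        double[] w = train(x, y, 0.5, 2000);
        System.out.println(Arrays.toString(w));
        System.out.println(sigmoid(w[0] + 5 * w[1])); // should be close to 1
    }
}
```

In the ticket's distributed design, each shard would run updates like this over its local documents, and the LogitStream would average the per-shard weights between iterations.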



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Breaking Java back-compat in Solr

2016-01-05 Thread Anshum Gupta
As I understand, seems like there's reasonable consensus that we will:

1. Provide strong back-compat for SolrJ and the REST APIs.
2. Strive to maintain, but not guarantee, *strong* back-compat for Java APIs.

Please correct me if I'm wrong.
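For the "annotate a few really important Java APIs" idea from this thread, a marker annotation could look like the following. The name `BackCompatGuaranteed` and its policy are purely hypothetical, not an existing Solr annotation:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class CompatDemo {
    /**
     * Hypothetical marker: types or methods carrying it would promise
     * back-compat across minor releases; everything unannotated would be
     * fair game for refactoring between minor versions.
     */
    @Documented
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.TYPE, ElementType.METHOD})
    public @interface BackCompatGuaranteed {
        String since();
    }

    @BackCompatGuaranteed(since = "5.5")
    static class StableApi {}

    public static void main(String[] args) {
        BackCompatGuaranteed g =
            StableApi.class.getAnnotation(BackCompatGuaranteed.class);
        System.out.println("guaranteed since " + g.since());
    }
}
```

With RUNTIME retention, a build-time or test-time check could scan for the annotation and fail the build when a guaranteed signature changes.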


On Mon, Jan 4, 2016 at 9:57 PM, Anshum Gupta  wrote:

> Hi,
>
> I was looking at refactoring code in Solr and it gets really tricky and
> confusing in terms of what level of back-compat needs to be maintained.
> Ideally, we should only maintain back-compat at the REST API level. We may
> annotate a few really important Java APIs where we guarantee back-compat
> across minor versions, but we certainly shouldn't be doing that across the
> board.
>
> Thoughts?
>
> P.S: I hope this doesn't spin-off into something I fear :)
>
> --
> Anshum Gupta
>



-- 
Anshum Gupta


[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1067 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1067/

All tests passed

Build Log:
[...truncated 10091 lines...]
[javac] Compiling 613 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build/solr-core/classes/test
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:801: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:738: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/build.xml:59: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/build.xml:233:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/solr/common-build.xml:526:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:808:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:822:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/common-build.xml:1956:
 Compile failed; see the compiler error output for details.

Total time: 133 minutes 9 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



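The repeated `cannot find symbol: putIfAbsent` failures above are consistent with `Map.putIfAbsent` being a default method introduced in Java 8, while branch_5x must still compile under Java 7. A sketch of a Java 7-compatible equivalent (the helper name is illustrative; the actual fix on the branch may differ):

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Java 7-compatible stand-in for Map.putIfAbsent, which only exists as a
 * default method from Java 8 onward.
 */
public class PutIfAbsentCompat {

  /** Inserts value only when key has no mapping; returns the previous value, or null. */
  public static <K, V> V putIfAbsent7(Map<K, V> map, K key, V value) {
    V prev = map.get(key);
    if (prev == null) {
      map.put(key, value);
    }
    return prev;
  }

  public static void main(String[] args) {
    Map<String, String> props = new HashMap<String, String>();
    props.put("solr.tests.maxBufferedDocs", "5");
    putIfAbsent7(props, "solr.tests.maxBufferedDocs", "10"); // existing value wins
    putIfAbsent7(props, "solr.tests.ramBufferSizeMB", "100"); // absent, so inserted
    System.out.println(props.get("solr.tests.maxBufferedDocs"));
    System.out.println(props.get("solr.tests.ramBufferSizeMB"));
  }
}
```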

[jira] [Comment Edited] (SOLR-8493) SolrHadoopAuthenticationFilter.getZkChroot: java.lang.StringIndexOutOfBoundsException: String index out of range: -1

2016-01-05 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085140#comment-15085140
 ] 

Ishan Chattopadhyaya edited comment on SOLR-8493 at 1/6/16 7:37 AM:


SolrHadoopAuthenticationFilter is custom code introduced in CDH's version of 
Solr. Can you please check with Cloudera's support? (It is roughly equivalent 
to Solr's KerberosFilter.)


was (Author: ichattopadhyaya):
SolrHadoopAuthenticationFilter is custom code introduced in CDH's version of 
Solr. Can you please check with Cloudera's support?

> SolrHadoopAuthenticationFilter.getZkChroot: 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> 
>
> Key: SOLR-8493
> URL: https://issues.apache.org/jira/browse/SOLR-8493
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.10.3
>Reporter: zuotingbing
>
> [error info]
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1904)
> at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter.getZkChroot(SolrHadoopAuthenticationFilter.java:147)
> [source code]:
> SolrHadoopAuthenticationFilter.java
>   private String getZkChroot() {
> String zkHost = System.getProperty("zkHost");
> return zkHost != null?
>   zkHost.substring(zkHost.indexOf("/"), zkHost.length()) : "/solr";
>   }






[jira] [Created] (SOLR-8493) SolrHadoopAuthenticationFilter.getZkChroot: java.lang.StringIndexOutOfBoundsException: String index out of range: -1

2016-01-05 Thread zuotingbing (JIRA)
zuotingbing created SOLR-8493:
-

 Summary: SolrHadoopAuthenticationFilter.getZkChroot: 
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
 Key: SOLR-8493
 URL: https://issues.apache.org/jira/browse/SOLR-8493
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.10.3
Reporter: zuotingbing


[error info]
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.String.substring(String.java:1904)
at 
org.apache.solr.servlet.SolrHadoopAuthenticationFilter.getZkChroot(SolrHadoopAuthenticationFilter.java:147)

[source code]:
SolrHadoopAuthenticationFilter.java

  private String getZkChroot() {
String zkHost = System.getProperty("zkHost");
return zkHost != null?
  zkHost.substring(zkHost.indexOf("/"), zkHost.length()) : "/solr";
  }
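The exception occurs because `indexOf("/")` returns -1 when `zkHost` contains no chroot (e.g. `host1:2181,host2:2181`), and `substring(-1, ...)` then throws. A guarded version might look like the sketch below; this is an illustration, not the shipped fix:

```java
/**
 * Sketch of a guarded getZkChroot: when zkHost has no "/" (no chroot),
 * indexOf returns -1, so fall back to the default "/solr" instead of
 * calling substring with a negative index.
 */
public class ZkChrootSketch {

  public static String getZkChroot(String zkHost) {
    if (zkHost == null) {
      return "/solr";
    }
    int slash = zkHost.indexOf('/');
    // e.g. "host1:2181,host2:2181" has no chroot -> use the default
    return slash >= 0 ? zkHost.substring(slash) : "/solr";
  }

  public static void main(String[] args) {
    System.out.println(getZkChroot("host1:2181,host2:2181/chroot"));
    System.out.println(getZkChroot("host1:2181,host2:2181"));
    System.out.println(getZkChroot(null));
  }
}
```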







[jira] [Comment Edited] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085065#comment-15085065
 ] 

Mark Miller edited comment on SOLR-8453 at 1/6/16 6:45 AM:
---

Okay, I'm starting to think it's a change in consumeAll in - 
https://github.com/eclipse/jetty.project/blame/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpInput.java#L443

I think perhaps that is now returning false in 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java#L403

It's looking to me like the server resets the connection because of unconsumed 
content, and previously it must have been properly consuming the extra.


was (Author: markrmil...@gmail.com):
Okay, I'm starting to think it's a change in consumeAll in - 
https://github.com/eclipse/jetty.project/blame/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpInput.java

I think perhaps that is now returning false in 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java

It's looking to me like the server resets the connection because of unconsumed 
content, and previously it must have been properly consuming the extra.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15454 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15454/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 3 ms! 
ClusterState: {   "collection1":{ "replicationFactor":"1", "shards":{   
"shard1":{ "range":"8000-", "state":"active",   
  "replicas":{"core_node2":{ "core":"collection1", 
"base_url":"http://127.0.0.1:53250/rk/yy;, 
"node_name":"127.0.0.1:53250_rk%2Fyy", "state":"active",
 "leader":"true"}}},   "shard2":{ "range":"0-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:38418/rk/yy;,
 "node_name":"127.0.0.1:38418_rk%2Fyy", "state":"active",   
  "leader":"true"},   "core_node3":{ 
"core":"collection1", "base_url":"http://127.0.0.1:56474/rk/yy;,
 "node_name":"127.0.0.1:56474_rk%2Fyy", 
"state":"active", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "control_collection":{ "replicationFactor":"1",
 "shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{"core_node1":{ 
"core":"collection1", "base_url":"http://127.0.0.1:57012/rk/yy;,
 "node_name":"127.0.0.1:57012_rk%2Fyy", "state":"active",   
  "leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false", 
"autoCreated":"true"},   "c8n_1x2":{ "replicationFactor":"2", 
"shards":{"shard1":{ "range":"8000-7fff", 
"state":"active", "replicas":{   "core_node1":{ 
"core":"c8n_1x2_shard1_replica1", 
"base_url":"http://127.0.0.1:56474/rk/yy;, 
"node_name":"127.0.0.1:56474_rk%2Fyy", "state":"recovering"},   
"core_node2":{ "core":"c8n_1x2_shard1_replica2", 
"base_url":"http://127.0.0.1:38418/rk/yy;, 
"node_name":"127.0.0.1:38418_rk%2Fyy", "state":"active",
 "leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"},   "collMinRf_1x3":{ 
"replicationFactor":"3", "shards":{"shard1":{ 
"range":"8000-7fff", "state":"active", "replicas":{ 
  "core_node1":{ "core":"collMinRf_1x3_shard1_replica3",
 "base_url":"http://127.0.0.1:57012/rk/yy;, 
"node_name":"127.0.0.1:57012_rk%2Fyy", "state":"active"},   
"core_node2":{ "core":"collMinRf_1x3_shard1_replica2", 
"base_url":"http://127.0.0.1:56474/rk/yy;, 
"node_name":"127.0.0.1:56474_rk%2Fyy", "state":"active"},   
"core_node3":{ "core":"collMinRf_1x3_shard1_replica1", 
"base_url":"http://127.0.0.1:38418/rk/yy;, 
"node_name":"127.0.0.1:38418_rk%2Fyy", "state":"active",
 "leader":"true", "router":{"name":"compositeId"}, 
"maxShardsPerNode":"1", "autoAddReplicas":"false"}}

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 3 ms! ClusterState: {
  "collection1":{
"replicationFactor":"1",
"shards":{
  "shard1":{
"range":"8000-",
"state":"active",
"replicas":{"core_node2":{
"core":"collection1",
"base_url":"http://127.0.0.1:53250/rk/yy",
"node_name":"127.0.0.1:53250_rk%2Fyy",
"state":"active",
"leader":"true"}}},
  "shard2":{
"range":"0-7fff",
"state":"active",
"replicas":{
  "core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:38418/rk/yy",
"node_name":"127.0.0.1:38418_rk%2Fyy",
"state":"active",
"leader":"true"},
  "core_node3":{
"core":"collection1",
"base_url":"http://127.0.0.1:56474/rk/yy",
"node_name":"127.0.0.1:56474_rk%2Fyy",
"state":"active",
"router":{"name":"compositeId"},
"maxShardsPerNode":"1",
"autoAddReplicas":"false",
"autoCreated":"true"},
  "control_collection":{
"replicationFactor":"1",
"shards":{"shard1":{
"range":"8000-7fff",
"state":"active",
"replicas":{"core_node1":{
"core":"collection1",
"base_url":"http://127.0.0.1:57012/rk/yy",
"node_name":"127.0.0.1:57012_rk%2Fyy",

[jira] [Updated] (SOLR-8450) Internal HttpClient used in SolrJ retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8450:
--
Attachment: SOLR-8450.patch

> Internal HttpClient used in SolrJ retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8450.patch, SOLR-8450.patch
>
>







[jira] [Commented] (SOLR-8453) Local exceptions in DistributedUpdateProcessor should not cut off an ongoing request.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085065#comment-15085065
 ] 

Mark Miller commented on SOLR-8453:
---

Okay, I'm starting to think it's a change in consumeAll in - 
https://github.com/eclipse/jetty.project/blame/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpInput.java

I think perhaps that is now returning false in 
https://github.com/eclipse/jetty.project/blob/jetty-9.3.x/jetty-server/src/main/java/org/eclipse/jetty/server/HttpConnection.java

It's looking to me like the server resets the connection because of unconsumed 
content, and previously it must have been properly consuming the extra.

> Local exceptions in DistributedUpdateProcessor should not cut off an ongoing 
> request.
> -
>
> Key: SOLR-8453
> URL: https://issues.apache.org/jira/browse/SOLR-8453
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, SOLR-8453.patch, 
> SOLR-8453_test.patch, SOLR-8453_test.patch
>
>
> The basic problem is that when we are streaming in updates via a client, an 
> update can fail in a way that further updates in the request will not be 
> processed, but not in a way that causes the client to stop and finish up the 
> request before the server does something else with that connection.
> This seems to mean that even after the server stops processing the request, 
> the concurrent update client is still in the process of sending the request. 
> It seems previously, Jetty would not go after the connection very quickly 
> after the server processing thread was stopped via exception, and the client 
> (usually?) had time to clean up properly. But after the Jetty upgrade from 
> 9.2 to 9.3, Jetty closes the connection on the server sooner than previous 
> versions (?), and the client does not end up getting notified of the original 
> exception at all and instead hits a connection reset exception. The result 
> was random fails due to connection reset throughout our tests and one 
> particular test failing consistently. Even before this update, it does not 
> seem like we are acting in a safe or 'behaved' manner, but our version of 
> Jetty was relaxed enough (or a bug was fixed?) for our tests to work out.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_66) - Build # 15452 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15452/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.search.AnalyticsMergeStrategyTest.test

Error Message:
Error from server at http://127.0.0.1:45334//collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:45334//collection1: 
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: 
Scheme 'http' not registered.
at 
__randomizedtesting.SeedInfo.seed([4C93A2BA95E86402:C4C79D603B1409FA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:575)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:562)
at 
org.apache.solr.search.AnalyticsMergeStrategyTest.test(AnalyticsMergeStrategyTest.java:85)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.7.0_80) - Build # 15157 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/15157/
Java: 64bit/jdk1.7.0_80 -XX:-UseCompressedOops -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 9752 lines...]
[javac] Compiling 613 source files to 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/classes/test
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:794: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:738: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/build.xml:59: The following error 
occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build.xml:233: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/common-build.xml:526: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:808: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:822: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-5.x-Linux/lucene/common-build.xml:1956: 
Compile failed; see the compiler error output for details.

Total time: 22 minutes 39 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 3910 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/3910/

All tests passed

Build Log:
[...truncated 9622 lines...]
[javac] Compiling 613 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build/solr-core/classes/test
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:794: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:738: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/build.xml:59: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/build.xml:233:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/solr/common-build.xml:526:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:808:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:822:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-5.x-Java7/lucene/common-build.xml:1956:
 Compile failed; see the compiler error output for details.

Total time: 26 minutes 43 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Commented] (SOLR-8493) SolrHadoopAuthenticationFilter.getZkChroot: java.lang.StringIndexOutOfBoundsException: String index out of range: -1

2016-01-05 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15085140#comment-15085140
 ] 

Ishan Chattopadhyaya commented on SOLR-8493:


SolrHadoopAuthenticationFilter is custom code introduced in CDH's version of 
Solr. Can you please check with Cloudera's support?

> SolrHadoopAuthenticationFilter.getZkChroot: 
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> 
>
> Key: SOLR-8493
> URL: https://issues.apache.org/jira/browse/SOLR-8493
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java
>Affects Versions: 4.10.3
>Reporter: zuotingbing
>
> [error info]
> java.lang.StringIndexOutOfBoundsException: String index out of range: -1
> at java.lang.String.substring(String.java:1904)
> at 
> org.apache.solr.servlet.SolrHadoopAuthenticationFilter.getZkChroot(SolrHadoopAuthenticationFilter.java:147)
> [source code]:
> SolrHadoopAuthenticationFilter.java
>   private String getZkChroot() {
> String zkHost = System.getProperty("zkHost");
> return zkHost != null?
>   zkHost.substring(zkHost.indexOf("/"), zkHost.length()) : "/solr";
>   }
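The exception in the quoted method occurs because `String.indexOf("/")` returns -1 when `zkHost` carries no chroot suffix, and `substring(-1, …)` then throws `StringIndexOutOfBoundsException`. A defensive rewrite might look like the following sketch (illustrative only, not the CDH code):

```java
public class ZkChrootExample {
    // Hypothetical defensive version of the quoted getZkChroot():
    // guard against a connect string that contains no "/" chroot.
    static String getZkChroot(String zkHost) {
        if (zkHost == null) {
            return "/solr";
        }
        int idx = zkHost.indexOf('/');
        // No chroot in the connect string: fall back to the default,
        // instead of calling substring(-1, ...) and throwing.
        return idx >= 0 ? zkHost.substring(idx) : "/solr";
    }

    public static void main(String[] args) {
        System.out.println(getZkChroot("zk1:2181,zk2:2181"));        // no chroot
        System.out.println(getZkChroot("zk1:2181,zk2:2181/mysolr")); // with chroot
    }
}
```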






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_66) - Build # 5395 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5395/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MDCAwareThreadPoolExecutor, MockDirectoryWrapper, SolrCore, 
MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MDCAwareThreadPoolExecutor, MockDirectoryWrapper, SolrCore, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([8F0DD6D4BB2873CB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=12176, name=searcherExecutor-5625-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=12176, name=searcherExecutor-5625-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2995 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2995/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at https://127.0.0.1:50628/s_ze/f/awholynewcollection_0: non 
ok status: 500, message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:50628/s_ze/f/awholynewcollection_0: non ok 
status: 500, message:Server Error
at 
__randomizedtesting.SeedInfo.seed([F21576DF506C287:877568B75BFAAF7F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:509)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForNon403or404or503(AbstractFullDistribZkTestBase.java:1754)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:658)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:160)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[jira] [Assigned] (SOLR-5209) last replica removal cascades to remove shard from clusterstate

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke reassigned SOLR-5209:
-

Assignee: Christine Poerschke  (was: Mark Miller)

> last replica removal cascades to remove shard from clusterstate
> ---
>
> Key: SOLR-5209
> URL: https://issues.apache.org/jira/browse/SOLR-5209
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.4
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Blocker
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-5209.patch, SOLR-5209.patch
>
>
> The problem we saw was that unloading of an only replica of a shard deleted 
> that shard's info from the clusterstate. Once it was gone then there was no 
> easy way to re-create the shard (other than dropping and re-creating the 
> whole collection's state).
> This seems like a bug?
> Overseer.java around line 600 has a comment and commented out code:
> // TODO TODO TODO!!! if there are no replicas left for the slice, and the 
> slice has no hash range, remove it
> // if (newReplicas.size() == 0 && slice.getRange() == null) {
> // if there are no replicas left for the slice remove it






SOLR-5209 in time for 6.0.0 release?

2016-01-05 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Hello Folks,

Would anyone have a little time to review and comment on the latest
  https://issues.apache.org/jira/browse/SOLR-5209
patch, which perhaps simply went unnoticed towards the end of 2015?

Thanks,

Christine

[jira] [Issue Comment Deleted] (SOLR-8459) NPE using TermVectorComponent in combination with ExactStatsCache

2016-01-05 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-8459:
---
Comment: was deleted

(was: Not yet. I'm waiting for committers to commit this patch to trunk.)

> NPE using TermVectorComponent in combination with ExactStatsCache
> -
>
> Key: SOLR-8459
> URL: https://issues.apache.org/jira/browse/SOLR-8459
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3
>Reporter: Andreas Daffner
> Attachments: SOLR-8459.patch
>
>
> Hello,
> I am getting an NPE when using the TermVectorComponent in combination with 
> ExactStatsCache.
> I am using SOLR 5.3.0 with 4 shards in total.
> I set up my solrconfig.xml as described in these 2 links:
> TermVectorComponent:
> https://cwiki.apache.org/confluence/display/solr/The+Term+Vector+Component
> ExactStatsCache:
> https://cwiki.apache.org/confluence/display/solr/Distributed+Requests#Configuring+statsCache+implementation
> My snippets from solrconfig.xml:
> {code}
> ...
>   <searchComponent name="tvComponent"
> class="org.apache.solr.handler.component.TermVectorComponent"/>
>   <requestHandler name="/tvrh"
> class="org.apache.solr.handler.component.SearchHandler">
> <lst name="defaults">
>   <bool name="tv">true</bool>
> </lst>
> <arr name="last-components">
>   <str>tvComponent</str>
> </arr>
>   </requestHandler>
> ...
> {code}
> Unfortunately a request to SOLR like 
> "http://host/solr/corename/tvrh?q=site_url_id:74" ends up with this NPE:
> {code}
> 4329458 ERROR (qtp59559151-17) [c:SingleDomainSite_11 s:shard1 r:core_node1 
> x:SingleDomainSite_11_shard1_replica1] o.a.s.c.SolrCore 
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:454)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:416)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:210)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:499)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)
> {code}
> According to https://issues.apache.org/jira/browse/SOLR-7756 this bug should 
> have been fixed in SOLR 5.3.0, but obviously this NPE is still present.
> Can you please help me here?






[jira] [Commented] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2016-01-05 Thread Jens Wille (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082836#comment-15082836
 ] 

Jens Wille commented on SOLR-8418:
--

[~andyetitmoves], I've verified that your commit fixes the issue. Thanks for 
taking care of it.

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
>Assignee: Ramkumar Aiyengar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8418.patch, SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (SOLR-8483) tweak open-exchange-rates.json test-file to avoid OpenExchangeRatesOrgProvider.java warnings

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082835#comment-15082835
 ] 

ASF subversion and git services commented on SOLR-8483:
---

Commit 1723040 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1723040 ]

SOLR-8483: relocate 'IMPORTANT NOTE' in open-exchange-rates.json test-file to 
avoid OpenExchangeRatesOrgProvider.java warnings.

> tweak open-exchange-rates.json test-file to avoid 
> OpenExchangeRatesOrgProvider.java warnings
> 
>
> Key: SOLR-8483
> URL: https://issues.apache.org/jira/browse/SOLR-8483
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8483.patch
>
>
> Tweak the {{open-exchange-rates.json}} test file so that 
> {{OpenExchangeRatesOrgProvider}} does not emit {{'Unknown key IMPORTANT 
> NOTE'}} and {{'Expected key, got STRING'}} warnings which can be confusing 
> when investigating unrelated test failures.






[jira] [Commented] (SOLR-8418) BoostQuery cannot be cast to TermQuery

2016-01-05 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082866#comment-15082866
 ] 

Ramkumar Aiyengar commented on SOLR-8418:
-

Cool, thanks for confirming!

> BoostQuery cannot be cast to TermQuery
> --
>
> Key: SOLR-8418
> URL: https://issues.apache.org/jira/browse/SOLR-8418
> Project: Solr
>  Issue Type: Bug
>  Components: MoreLikeThis
>Affects Versions: 5.4
>Reporter: Jens Wille
>Assignee: Ramkumar Aiyengar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8418.patch, SOLR-8418.patch
>
>
> As a consequence of LUCENE-6590, {{MoreLikeThisHandler}} was changed in 
> r1701621 to use the new API. In SOLR-7912, I adapted that code for 
> {{CloudMLTQParser}} and {{SimpleMLTQParser}}. However, setting the {{boost}} 
> parameter just failed for me after updating to 5.4 with the following error 
> message:
> {code}
> java.lang.ClassCastException: org.apache.lucene.search.BoostQuery cannot be 
> cast to org.apache.lucene.search.TermQuery
> at 
> org.apache.solr.search.mlt.SimpleMLTQParser.parse(SimpleMLTQParser.java:139)
> at org.apache.solr.search.QParser.getQuery(QParser.java:141)
> at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:160)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:254)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:156)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2073)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:658)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:457)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:222)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:181)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Commented] (LUCENE-6956) TestBKDTree.testRandomMedium() failure: some hits were wrong

2016-01-05 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15082856#comment-15082856
 ] 

Michael McCandless commented on LUCENE-6956:


OK I finally isolated this test failure to an apparent bug/lossy double-precision 
issue in {{GeoRelationUtils.rectCrossesPoly}}; if you add this test 
case to {{TestGeoUtils.java}}, the 2nd (last) assert in the test fails:

{noformat}
  public void testBKDCase1() throws Exception {
double[] polyLats = new double[] {86.83882305398583, 86.8827043287456, 
86.8827043287456, 86.83882305398583, 86.83882305398583};
double[] polyLons = new double[] {-9.594408497214317, -9.594408497214317, 
-8.752231243997812, -8.752231243997812, -9.594408497214317};
double polyMinLat = 86.83882305398583;
double polyMaxLat = 86.8827043287456;
double polyMinLon = -9.594408497214317;
double polyMaxLon = -8.752231243997812;

double cellMinLat = -89.9997904524;
double cellMaxLat = 86.8827033836041;
double cellMinLon = -179.9995809048;
double cellMaxLon = 179.9995809048;

// Cell is massive vs small poly so it's definitely NOT within:
assertFalse(GeoRelationUtils.rectWithinPoly(cellMinLon, cellMinLat, 
cellMaxLon, cellMaxLat, polyLons, polyLats, polyMinLon, polyMinLat, polyMaxLon, 
polyMaxLat));

// But cell does cross the poly (barely!):
assertTrue(GeoRelationUtils.rectCrossesPoly(cellMinLon, cellMinLat, 
cellMaxLon, cellMaxLat, polyLons, polyLats, polyMinLon, polyMinLat, polyMaxLon, 
polyMaxLat));
  }
{noformat}

I think the problem is that the {{polyMaxLat}} is just a wee bit over 
{{cellMaxLat}} and floating point errors in {{lineCrossesLine}} must then 
incorrectly conclude the poly is fully contained inside the cell?
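To illustrate how tight the margin is, the bounds quoted in the test case above can be compared directly; the polygon's top edge exceeds the cell's by less than a microdegree (the class name below is hypothetical):

```java
public class BkdBoundaryGap {
    // Bounds quoted from the failing testBKDCase1 case above.
    static final double POLY_MAX_LAT = 86.8827043287456;
    static final double CELL_MAX_LAT = 86.8827033836041;

    // How far the polygon's max latitude pokes above the cell's.
    static double gap() {
        return POLY_MAX_LAT - CELL_MAX_LAT;
    }

    public static void main(String[] args) {
        // The gap is positive but under 1e-6 degrees, so the rect/poly
        // relation checks operate right at the edge of double precision.
        System.out.println(gap() > 0);
        System.out.println(gap());
    }
}
```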

This causes BKD tree on recursing to incorrectly skip a whole part of the 
sub-tree, missing hits that are well within the query polygon.

[~nknize] any ideas what we can do to fix this?


> TestBKDTree.testRandomMedium() failure: some hits were wrong
> 
>
> Key: LUCENE-6956
> URL: https://issues.apache.org/jira/browse/LUCENE-6956
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>
> My Jenkins found a reproducible seed for a failure of 
> {{TestBKDTree.testRandomMedium()}} on branch_5x with Java8:
> {noformat}
>   [junit4] Suite: org.apache.lucene.bkdtree.TestBKDTree
>[junit4]   1> T1: id=29784 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29528
>[junit4]   1>   lat=86.88086835667491 lon=-8.821268286556005
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29801 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29545
>[junit4]   1>   lat=86.88149104826152 lon=-9.34366637840867
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=29961 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29705
>[junit4]   1>   lat=86.8706679996103 lon=-9.38328042626381
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30015 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29759
>[junit4]   1>   lat=86.84762765653431 lon=-9.44802425801754
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30017 should match but did not
>[junit4]   1>   small=true query=BKDPointInPolygonQuery: field=point: 
> Points: [-9.594408497214317, 86.83882305398583] [-9.594408497214317, 
> 86.8827043287456] [-8.752231243997812, 86.8827043287456] [-8.752231243997812, 
> 86.83882305398583] [-9.594408497214317, 86.83882305398583]  docID=29761
>[junit4]   1>   lat=86.8753323610872 lon=-9.091365560889244
>[junit4]   1>   deleted?=false
>[junit4]   1> T1: id=30042 should match but did not
>[junit4]   1>   small=true 

[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.
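The per-shard SGD pass and the cross-shard weight averaging described above can be sketched roughly as follows. This is an illustrative sketch only, not the code from the attached SOLR-8492 patch; the class and method names (LogitSgdSketch, sgdPass, average) are hypothetical.

```java
import java.util.Arrays;

// Hypothetical sketch of the two numeric steps: one SGD pass of logistic
// regression over a shard's local samples, and the averaging of the
// resulting per-shard weights before the next iteration.
public class LogitSgdSketch {

    // logistic (sigmoid) link function
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // One stochastic-gradient-descent pass over local samples.
    // x: feature rows, y: outcomes in {0,1}, weights: averaged weights from
    // the previous iteration, rate: learning rate.
    static double[] sgdPass(double[][] x, double[] y, double[] weights, double rate) {
        double[] w = Arrays.copyOf(weights, weights.length);
        for (int i = 0; i < x.length; i++) {
            double z = 0;
            for (int j = 0; j < w.length; j++) z += w[j] * x[i][j];
            double err = sigmoid(z) - y[i];                 // prediction error
            for (int j = 0; j < w.length; j++) w[j] -= rate * err * x[i][j];
        }
        return w;                                           // local weights to report back
    }

    // Averaging step: merge the per-shard weight vectors for the next iteration.
    static double[] average(double[][] shardWeights) {
        double[] avg = new double[shardWeights[0].length];
        for (double[] sw : shardWeights)
            for (int j = 0; j < avg.length; j++) avg[j] += sw[j] / shardWeights.length;
        return avg;
    }
}
```

With a single sample x = (1, 1), outcome 1, zero starting weights and rate 0.1, one pass moves both weights to 0.05, since sigmoid(0) = 0.5 and the error is -0.5.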

  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.


> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards for the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches 

[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Attachment: SOLR-8492.patch

> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards for the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream will return the EOF Tuple when it reaches the defined 
> maxIterations. When sent as a Streaming Expression to the Stream handler this 
> provides parallel iterative behavior. This same approach can be used to 
> implement other parallel iterative algorithms.
> The initial patch has a test which simply tests the mechanics of the 
> iteration. More work will need to be done to ensure the SGD is properly 
> implemented. The distributed approach of the SGD will also need to be 
> reviewed.  
> This implementation is designed for use cases with a small number of features 
> because each feature is its own discrete field.
> An implementation which supports a higher number of features would be 
> possible by packing features into a byte array and storing as binary 
> DocValues.
> This implementation is designed to support a large sample set. With a large 
> number of shards, a sample set into the billions may be possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards with 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

Sample Streaming Expression syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}
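The iterative read() loop that the expression above drives can be sketched as a plain driver: fan the current model out to the shards, average what comes back, and repeat until maxIterations. This is a hypothetical illustration of the control flow only; the Shard interface and the names LogitIterationSketch and iterate are invented for the sketch and are not the LogitStream API.

```java
// Hypothetical sketch of the LogitStream-style iteration: each round
// corresponds to one read() call that returns the averaged model.
public class LogitIterationSketch {

    // Stand-in for a shard executing the LogisticRegressionQuery locally
    // against the weights it was handed.
    interface Shard { double[] train(double[] startWeights); }

    static double[] iterate(Shard[] shards, double[] init, int maxIterations) {
        double[] w = init;
        for (int iter = 0; iter < maxIterations; iter++) {   // one read() per iteration
            double[] avg = new double[w.length];
            for (Shard s : shards) {                         // send current model to each shard
                double[] sw = s.train(w);
                for (int j = 0; j < w.length; j++) avg[j] += sw[j] / shards.length;
            }
            w = avg;                                         // averaged model streamed back
        }
        return w;                                            // after maxIterations: EOF Tuple
    }
}
```

The same skeleton (broadcast model, gather partial results, average, repeat) is what makes the approach reusable for other parallel iterative algorithms, as the description notes.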



  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards with 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.


> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards with the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the 

[jira] [Commented] (SOLR-8451) We should not call method.abort in HttpSolrClient.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083852#comment-15083852
 ] 

Mark Miller commented on SOLR-8451:
---

I have a connection reuse test that hits HttpSolrClient, CloudSolrClient, and 
ConcurrentUpdateSolrClient. Once I polish it up a little, I'll commit it with 
this issue.

> We should not call method.abort in HttpSolrClient.
> --
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8451.patch
>
>







[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards with 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.


> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to shards and executes the LogisticRegressionQuery. The model data is 
> collected from the shards and the weights are averaged and sent back to the 
> shards with the next iteration. Each call to read() returns a Tuple with the 
> averaged weights and error from the shards. With this approach the 
> LogitStream streams the changing model back to the client after each 
> iteration.
> The LogitStream 

[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards with the next iteration. Each call to read() returns a Tuple with the 
averaged weights and error from the shards. With this approach the LogitStream 
streams the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

Sample Streaming Expression syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}



  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards with 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

Sample Streaming Expression syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}




> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and 

[jira] [Created] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8492:


 Summary: Add LogisticRegressionQuery and LogitStream
 Key: SOLR-8492
 URL: https://issues.apache.org/jira/browse/SOLR-8492
 Project: Solr
  Issue Type: New Feature
Reporter: Joel Bernstein


This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to shards and executes the LogisticRegressionQuery. The model data is collected 
from the shards and the weights are averaged and sent back to the shards for 
the next iteration. Each call to read() returns a Tuple with the averaged 
weights and error from the shards. With this approach the LogitStream streams 
the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.






[jira] [Resolved] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-8489.
---
   Resolution: Fixed
Fix Version/s: Trunk
   5.5

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[jira] [Commented] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083837#comment-15083837
 ] 

ASF subversion and git services commented on SOLR-8489:
---

Commit 1723170 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723170 ]

SOLR-8489: TestMiniSolrCloudCluster.createCollection to support extra & 
alternative collectionProperties (merge in revision 1723162 from trunk)

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[jira] [Updated] (SOLR-8492) Add LogisticRegressionQuery and LogitStream

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8492:
-
Description: 
This ticket is to add a new query called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards with the next iteration. Each call to read() returns a Tuple with the 
averaged weights and error from the shards. With this approach the LogitStream 
streams the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

Sample Streaming Expression syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}



  was:
This ticket is to add a new query type called a LogisticRegressionQuery (LRQ).

The LRQ extends AnalyticsQuery 
(http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html) 
and returns a DelegatingCollector that implements a Stochastic Gradient Descent 
(SGD) optimizer for Logistic Regression.

This ticket also adds the LogitStream which leverages Streaming Expressions to 
provide iteration over the shards. Each call to LogitStream.read() calls down 
to the shards and executes the LogisticRegressionQuery. The model data is 
collected from the shards and the weights are averaged and sent back to the 
shards with the next iteration. Each call to read() returns a Tuple with the 
averaged weights and error from the shards. With this approach the LogitStream 
streams the changing model back to the client after each iteration.

The LogitStream will return the EOF Tuple when it reaches the defined 
maxIterations. When sent as a Streaming Expression to the Stream handler this 
provides parallel iterative behavior. This same approach can be used to 
implement other parallel iterative algorithms.

The initial patch has a test which simply tests the mechanics of the 
iteration. More work will need to be done to ensure the SGD is properly 
implemented. The distributed approach of the SGD will also need to be reviewed. 
 

This implementation is designed for use cases with a small number of features 
because each feature is its own discrete field.

An implementation which supports a higher number of features would be possible 
by packing features into a byte array and storing as binary DocValues.

This implementation is designed to support a large sample set. With a large 
number of shards, a sample set into the billions may be possible.

sample Streaming Expression Syntax:

{code}

logit(collection1, features="a,b,c,d,e,f" outcome="x" maxIterations="80")

{code}




> Add LogisticRegressionQuery and LogitStream
> ---
>
> Key: SOLR-8492
> URL: https://issues.apache.org/jira/browse/SOLR-8492
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
> Attachments: SOLR-8492.patch
>
>
> This ticket is to add a new query called a LogisticRegressionQuery (LRQ).
> The LRQ extends AnalyticsQuery 
> (http://joelsolr.blogspot.com/2015/12/understanding-solrs-analyticsquery.html)
>  and returns a DelegatingCollector that implements a Stochastic Gradient 
> Descent (SGD) optimizer for Logistic Regression.
> This ticket also adds the LogitStream which leverages Streaming Expressions 
> to provide iteration over the shards. Each call to LogitStream.read() calls 
> down to the shards and executes the LogisticRegressionQuery. The model data 
> is collected from the shards and the weights are averaged and sent 

[jira] [Created] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-8489:
-

 Summary: TestMiniSolrCloudCluster.createCollection to support 
extra & alternative collectionProperties
 Key: SOLR-8489
 URL: https://issues.apache.org/jira/browse/SOLR-8489
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke
Priority: Minor


* add optional collectionProperties map arg and use putIfAbsent instead of put 
with the map
* move persistIndex i.e. solr.directoryFactory randomisation from the several 
callers to just-once in createCollection

These changes are refactors only and intended to *not* change the existing 
tests' behaviour.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-8489:
--
Attachment: SOLR-8489.patch

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[jira] [Updated] (SOLR-3141) Deprecate OPTIMIZE command in Solr

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-3141:
--
Attachment: SOLR-3141.patch

Attached slightly modified patch. Will commit to trunk only tomorrow if no 
objections.

Unless someone feels inclined to implement more code changes for this issue, I'll 
rename and close this JIRA after committing the log patch.

> Deprecate OPTIMIZE command in Solr
> --
>
> Key: SOLR-3141
> URL: https://issues.apache.org/jira/browse/SOLR-3141
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 3.5
>Reporter: Jan Høydahl
>  Labels: force, optimize
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3141.patch, SOLR-3141.patch, SOLR-3141.patch
>
>
> Background: LUCENE-3454 renames optimize() as forceMerge(). Please read that 
> issue first.
> Now that optimize() is rarely necessary anymore, and renamed in Lucene APIs, 
> what should be done with Solr's ancient optimize command?






[jira] [Assigned] (SOLR-3141) Deprecate OPTIMIZE command in Solr

2016-01-05 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-3141:
-

Assignee: Jan Høydahl

> Deprecate OPTIMIZE command in Solr
> --
>
> Key: SOLR-3141
> URL: https://issues.apache.org/jira/browse/SOLR-3141
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 3.5
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: force, optimize
> Fix For: 4.9, Trunk
>
> Attachments: SOLR-3141.patch, SOLR-3141.patch, SOLR-3141.patch
>
>
> Background: LUCENE-3454 renames optimize() as forceMerge(). Please read that 
> issue first.
> Now that optimize() is rarely necessary anymore, and renamed in Lucene APIs, 
> what should be done with Solr's ancient optimize command?






[jira] [Commented] (SOLR-7733) remove/rename "optimize" references in the UI.

2016-01-05 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083041#comment-15083041
 ] 

Jan Høydahl commented on SOLR-7733:
---

The current UI (both classic and Angular) still has a green "Optimized" 
checkmark, which seems to always stay green (both on overview page and on core 
admin page). Should we get rid of them?

Also, the Angular UI removes the "Optimize" button from the Core Admin page. I 
vote for bringing the button back, but with an educational popup:

{panel:title=Are you sure?|borderStyle=dashed|borderColor=#ccc|titleBGColor=#F7D6C1|bgColor=#CE}
This will read and write the *entire index*, merging all documents into one 
segment, and can be very expensive.{panel}

Related: I have many times missed a {{Commit}} button in the core admin and 
collections tabs. What do you think?

> remove/rename "optimize" references in the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 5.3, Trunk
>Reporter: Erick Erickson
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.






[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_66) - Build # 5394 - Still Failing!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5394/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseG1GC

8 tests failed.
FAILED:  org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test

Error Message:
partialResults were expected expected: but was:

Stack Trace:
java.lang.AssertionError: partialResults were expected expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([AB6E62193E7C069E:233A5DC390806B66]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertPartialResults(CloudExitableDirectoryReaderTest.java:102)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:73)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.test(CloudExitableDirectoryReaderTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-8450) Internal HttpClient used in SolrJ retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15084496#comment-15084496
 ] 

Mark Miller commented on SOLR-8450:
---

This was causing my new connection reuse test in SOLR-8451 to fail on trunk 
(only with the jetty upgrade).

It seems that we were retrying on ConcurrentUpdateSolrClient requests. I had 
expected those retries to fail as non-retriable.

Here is a patch with a subset of changes from SOLR-8451. We can use chunked 
encoding to detect streaming if we start using the content stream sizes in 
HttpSolrClient (which is more efficient anyway?).

> Internal HttpClient used in SolrJ retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
>







[jira] [Updated] (SOLR-8450) Internal HttpClient used in SolrJ retries requests by default

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8450:
--
Attachment: SOLR-8450.patch

> Internal HttpClient used in SolrJ retries requests by default
> 
>
> Key: SOLR-8450
> URL: https://issues.apache.org/jira/browse/SOLR-8450
> Project: Solr
>  Issue Type: Bug
>  Components: clients - java, SolrJ
>Reporter: Shalin Shekhar Mangar
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8450.patch
>
>







[jira] [Commented] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15084514#comment-15084514
 ] 

Steve Rowe commented on SOLR-8489:
--

Compilation on branch_5x is failing while compiling 
{{TestMiniSolrCloudCluster.java}}.  See e.g. 
https://builds.apache.org/job/Lucene-Solr-Maven-5.x/1152/.

{{ant compile-test}} fails for me - {{Map.putIfAbsent()}} was added in Java 8:

{noformat}
common.compile-test:
[javac] Compiling 7 source files to 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/build/solr-core/classes/test
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:103:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent(CoreDescriptor.CORE_CONFIG, 
"solrconfig-tlog.xml");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:104:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.maxBufferedDocs", 
"10");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:105:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.ramBufferSizeMB", 
"100");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:107:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergePolicy", 
"org.apache.lucene.index.TieredMergePolicy");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:108:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.tests.mergeScheduler", 
"org.apache.lucene.index.ConcurrentMergeScheduler");
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/cloud/TestMiniSolrCloudCluster.java:109:
 error: cannot find symbol
[javac] collectionProperties.putIfAbsent("solr.directoryFactory", 
(persistIndex ? "solr.StandardDirectoryFactory" : "solr.RAMDirectoryFactory"));
[javac] ^
[javac]   symbol:   method putIfAbsent(String,String)
[javac]   location: variable collectionProperties of type Map
[javac] Note: 
/Users/sarowe/svn/lucene/dev/branches/branch_5x/solr/core/src/test/org/apache/solr/search/mlt/CloudMLTQParserTest.java
 uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 6 errors
{noformat}
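On a Java 7 branch, the same keep-existing-value behaviour can be had with a guarded put. A minimal sketch of one possible workaround (the helper name {{PutIfAbsentCompat}} is hypothetical, and unlike the real {{Map.putIfAbsent}} it does not return the previous value):

```java
import java.util.HashMap;
import java.util.Map;

// Map.putIfAbsent exists only on Java 8+; on branch_5x (Java 7) an
// equivalent guarded put can be used instead.
public class PutIfAbsentCompat {

    // Java 7-safe stand-in for map.putIfAbsent(key, value): only puts
    // when the key is not already mapped.
    public static void putIfAbsent(Map<String, String> map, String key, String value) {
        if (!map.containsKey(key)) {
            map.put(key, value);
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("solr.tests.maxBufferedDocs", "5");            // caller-supplied value
        putIfAbsent(props, "solr.tests.maxBufferedDocs", "10");  // key present: kept at "5"
        putIfAbsent(props, "solr.tests.ramBufferSizeMB", "100"); // key absent: added
        System.out.println(props.get("solr.tests.maxBufferedDocs")); // prints 5
        System.out.println(props.get("solr.tests.ramBufferSizeMB")); // prints 100
    }
}
```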

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 5 - Still Failing

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/5/

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
GC overhead limit exceeded

Stack Trace:
java.lang.OutOfMemoryError: GC overhead limit exceeded
at 
__randomizedtesting.SeedInfo.seed([667D296926F4C3DA:EE2916B38808AE22]:0)
at java.util.HashMap.entrySet0(HashMap.java:1073)
at java.util.HashMap.entrySet(HashMap.java:1068)
at java.util.AbstractMap.hashCode(AbstractMap.java:492)
at java.util.HashMap.hash(HashMap.java:362)
at java.util.HashMap.put(HashMap.java:492)
at java.util.HashSet.add(HashSet.java:217)
at 
org.apache.solr.cloud.CloudInspectUtil.showDiff(CloudInspectUtil.java:125)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:206)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:167)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:677)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:153)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)


FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=15287, name=collection0, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=15287, name=collection0, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:42995: Could not find collection : 
awholynewstresscollection_collection0_0
at __randomizedtesting.SeedInfo.seed([667D296926F4C3DA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:560)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:376)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:328)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1098)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:869)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:805)
at 

[jira] [Updated] (SOLR-839) XML Query Parser support

2016-01-05 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-839:
-
Affects Version/s: 5.4

> XML Query Parser support
> 
>
> Key: SOLR-839
> URL: https://issues.apache.org/jira/browse/SOLR-839
> Project: Solr
>  Issue Type: New Feature
>  Components: query parsers
>Affects Versions: 1.3, 5.4, Trunk
>Reporter: Erik Hatcher
>Assignee: Erik Hatcher
>Priority: Minor
> Fix For: Trunk
>
> Attachments: SOLR-839-object-parser.patch, SOLR-839.patch, 
> SOLR-839.patch, lucene-xml-query-parser-2.4-dev.jar
>
>
> Lucene contrib includes a query parser that is able to create the 
> full-spectrum of Lucene queries, using an XML data structure.
> This patch adds "xml" query parser support to Solr.






[jira] [Created] (SOLR-8491) solr.cmd SOLR_SSL_OPTS is overwritten

2016-01-05 Thread Sam Yi (JIRA)
Sam Yi created SOLR-8491:


 Summary: solr.cmd SOLR_SSL_OPTS is overwritten
 Key: SOLR-8491
 URL: https://issues.apache.org/jira/browse/SOLR-8491
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.2, Trunk
 Environment: Windows
Reporter: Sam Yi


In solr.cmd, the SOLR_SSL_OPTS variable is assigned within a block, and then 
assigned again later in the same block, using {{%SOLR_SSL_OPTS%}} to append to 
itself. However, because %-expansion happens when the whole block is parsed, 
{{%SOLR_SSL_OPTS%}} resolves to the variable's value from before the block was 
entered (typically nothing), so everything in the first assignment (the 
solr.jetty opts) gets overwritten.

I was able to work around this by using {code}!SOLR_SSL_OPTS!{code} instead of 
{{%SOLR_SSL_OPTS%}} in the second assignments (in both the {{IF}} and {{ELSE}} 
blocks), since delayed expansion is enabled.

Here's the full block for reference, from commit 
d4e3f50a6f6bc7b96fa6317f028ae26be25c8928, lines 43-55:
{code}IF DEFINED SOLR_SSL_KEY_STORE (
  set "SOLR_JETTY_CONFIG=--module=https"
  set SOLR_URL_SCHEME=https
  set "SCRIPT_ERROR=Solr server directory %SOLR_SERVER_DIR% not found!"
  set "SOLR_SSL_OPTS=-Dsolr.jetty.keystore=%SOLR_SSL_KEY_STORE% 
-Dsolr.jetty.keystore.password=%SOLR_SSL_KEY_STORE_PASSWORD% 
-Dsolr.jetty.truststore=%SOLR_SSL_TRUST_STORE% 
-Dsolr.jetty.truststore.password=%SOLR_SSL_TRUST_STORE_PASSWORD% 
-Dsolr.jetty.ssl.needClientAuth=%SOLR_SSL_NEED_CLIENT_AUTH% 
-Dsolr.jetty.ssl.wantClientAuth=%SOLR_SSL_WANT_CLIENT_AUTH%"
  IF DEFINED SOLR_SSL_CLIENT_KEY_STORE  (
set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
-Djavax.net.ssl.keyStore=%SOLR_SSL_CLIENT_KEY_STORE% 
-Djavax.net.ssl.keyStorePassword=%SOLR_SSL_CLIENT_KEY_STORE_PASSWORD% 
-Djavax.net.ssl.trustStore=%SOLR_SSL_CLIENT_TRUST_STORE% 
-Djavax.net.ssl.trustStorePassword=%SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD%"
  ) ELSE (
set "SOLR_SSL_OPTS=%SOLR_SSL_OPTS% 
-Djavax.net.ssl.keyStore=%SOLR_SSL_KEY_STORE% 
-Djavax.net.ssl.keyStorePassword=%SOLR_SSL_KEY_STORE_PASSWORD% 
-Djavax.net.ssl.trustStore=%SOLR_SSL_TRUST_STORE% 
-Djavax.net.ssl.trustStorePassword=%SOLR_SSL_TRUST_STORE_PASSWORD%"
  )
) ELSE (
  set SOLR_SSL_OPTS=
)
{code}







[jira] [Commented] (SOLR-7733) remove/rename "optimize" references in the UI.

2016-01-05 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083736#comment-15083736
 ] 

Shawn Heisey commented on SOLR-7733:


bq. I vote for bringing the button back, but with an educational popup

Sounds good to me.  If the server is in cloud mode and the button is pressed on 
a core, the dialog might want to mention that it will in fact optimize the 
entire collection.  There is no way to disable this -- distrib=false is not 
honored.  I thought we had an issue to have optimize on SolrCloud honor 
distrib=false, but I can't find one.

bq. I have many times missed a Commit button in the core admin and collections 
tabs

That would be interesting.  Since I am reasonably sure that mechanisms are in 
place to ignore a commit operation when the index hasn't actually changed, this 
is probably a safe thing to add, and would be helpful for troubleshooting.


> remove/rename "optimize" references in the UI.
> --
>
> Key: SOLR-7733
> URL: https://issues.apache.org/jira/browse/SOLR-7733
> Project: Solr
>  Issue Type: Improvement
>  Components: UI
>Affects Versions: 5.3, Trunk
>Reporter: Erick Erickson
>Assignee: Upayavira
>Priority: Minor
> Attachments: SOLR-7733.patch
>
>
> Since optimizing indexes is kind of a special circumstance thing, what do we 
> think about removing (or renaming) optimize-related stuff on the core admin 
> and core overview pages? The "optimize" button is already gone from the core 
> admin screen (was this intentional?).
> My personal feeling is that we should remove this entirely as it's too easy 
> to think "Of course I want my index optimized" and "look, this screen says my 
> index isn't optimized, that must mean I should optimize it".
> The core admin screen and the core overview page both have an "optimized" 
> checkmark, I propose just removing it from the "overview" page and on the 
> "core admin" page changing it to "Segment Count #". NOTE: the "overview" page 
> already has a "Segment Count" entry.






[jira] [Commented] (SOLR-8489) TestMiniSolrCloudCluster.createCollection to support extra & alternative collectionProperties

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15083763#comment-15083763
 ] 

ASF subversion and git services commented on SOLR-8489:
---

Commit 1723162 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1723162 ]

SOLR-8489: TestMiniSolrCloudCluster.createCollection to support extra & 
alternative collectionProperties

> TestMiniSolrCloudCluster.createCollection to support extra & alternative 
> collectionProperties
> -
>
> Key: SOLR-8489
> URL: https://issues.apache.org/jira/browse/SOLR-8489
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8489.patch
>
>
> * add optional collectionProperties map arg and use putIfAbsent instead of 
> put with the map
> * move persistIndex i.e. solr.directoryFactory randomisation from the several 
> callers to just-once in createCollection
> These changes are refactors only and intended to *not* change the existing 
> tests' behaviour.






[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #1152: POMs out of sync

2016-01-05 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/1152/

No tests ran.

Build Log:
[...truncated 39672 lines...]
  [mvn] [INFO] -
  [mvn] [INFO] -
  [mvn] [ERROR] COMPILATION ERROR : 
  [mvn] [INFO] -

[...truncated 845 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:817: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 19 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Updated] (SOLR-8451) We should not call method.abort in HttpSolrClient.

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8451:
--
Attachment: SOLR-8451.patch

Patch with connection reuse test attached. This new test won't work until we 
address the troublesome Jetty upgrade.

> We should not call method.abort in HttpSolrClient.
> --
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8451.patch, SOLR-8451.patch
>
>







[jira] [Commented] (LUCENE-6938) Convert build to work with Git rather than SVN.

2016-01-05 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15084077#comment-15084077
 ] 

Mark Miller commented on LUCENE-6938:
-

We don't need to vote yet - that only happens when consensus fails or someone 
wants to force something. We can warn the dev list again to make sure everyone 
is caught up, but no need to force a vote unless someone comes out against. 
There is a very visible discussion and a few JIRA issues that have been in 
progress for a long time now. Once we are ready to go, we can sum things up in 
a new dev thread.

I think Uwe has detailed pretty well what needs to be covered here. We want all the targets to work, really - or to understand why any target does not. We can wait for Uwe to create a new git validator, though - all targets still work without that. 'svn' does not really have a very deep imprint 
in our build targets.

I think the main thing left to do in this issue is put the git hash in 
efficiently.

Some other things people are concerned about can get further JIRA issues, but I 
imagine a lot of that (such as python scripts) can be updated as used / needed 
by those that use them.
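For what it's worth, the "efficiently" part usually means avoiding a forked `git` process on every build. One common trick - a sketch only, not necessarily what this issue will settle on, and it deliberately ignores packed-refs, detached worktrees, and similar cases - is to read `.git/HEAD` directly:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class GitHashSketch {

    // Resolve HEAD without forking a `git` process: read .git/HEAD and, if it
    // is a symbolic ref ("ref: refs/heads/..."), read the ref file it names.
    public static String resolveHead(Path gitDir) throws IOException {
        String head = new String(Files.readAllBytes(gitDir.resolve("HEAD")),
                StandardCharsets.UTF_8).trim();
        if (head.startsWith("ref: ")) {
            return new String(Files.readAllBytes(gitDir.resolve(head.substring(5))),
                    StandardCharsets.UTF_8).trim();
        }
        return head; // detached HEAD: the file holds the hash itself
    }

    public static void main(String[] args) throws IOException {
        // Demo against a faked-up .git layout, so it runs outside a checkout:
        Path tmp = Files.createTempDirectory("fakegit");
        Files.createDirectories(tmp.resolve("refs/heads"));
        Files.write(tmp.resolve("HEAD"),
                "ref: refs/heads/trunk\n".getBytes(StandardCharsets.UTF_8));
        Files.write(tmp.resolve("refs/heads/trunk"),
                "deadbeefdeadbeefdeadbeefdeadbeefdeadbeef\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(resolveHead(tmp));
        // prints: deadbeefdeadbeefdeadbeefdeadbeefdeadbeef
    }
}
```

A real implementation would fall back to `git rev-parse HEAD` (or parse `packed-refs`) when the loose ref file is absent.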

> Convert build to work with Git rather than SVN.
> ---
>
> Key: LUCENE-6938
> URL: https://issues.apache.org/jira/browse/LUCENE-6938
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: LUCENE-6938.patch
>
>
> We assume an SVN checkout in parts of our build and will need to move to 
> assuming a Git checkout.
> Patches against https://github.com/dweiss/lucene-solr-svn2git from 
> LUCENE-6933.






[jira] [Updated] (SOLR-8451) We should not call method.abort in HttpSolrClient.

2016-01-05 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-8451:
--
Attachment: SOLR-8451.patch

> We should not call method.abort in HttpSolrClient.
> --
>
> Key: SOLR-8451
> URL: https://issues.apache.org/jira/browse/SOLR-8451
> Project: Solr
>  Issue Type: Bug
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8451.patch, SOLR-8451.patch, SOLR-8451.patch
>
>







[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_66) - Build # 5524 - Failure!

2016-01-05 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5524/
Java: 32bit/jdk1.8.0_66 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20160106004517175, index.20160106004518438, index.properties, 
replication.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20160106004517175, index.20160106004518438, 
index.properties, replication.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([E9A85FB0F2D22DA7:32035F76F7FA4414]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:820)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:787)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Closed] (SOLR-7535) Add UpdateStream to Streaming API and Streaming Expression

2016-01-05 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-7535.

Resolution: Fixed

> Add UpdateStream to Streaming API and Streaming Expression
> --
>
> Key: SOLR-7535
> URL: https://issues.apache.org/jira/browse/SOLR-7535
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java, SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, SOLR-7535.patch, 
> SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different Solr Cloud collections, 
> merge and transform the streams and send the transformed data to another Solr 
> Cloud collection.
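As a sketch of what that wrapping might look like on the expression side (collection names, field names, and the exact parameter names are illustrative and depend on the final patch):

```
update(destCollection,
       batchSize=250,
       search(srcCollection,
              q="*:*",
              fl="id,field_a,field_b",
              sort="id asc"))
```

i.e. the inner search stream is read tuple by tuple and each batch of tuples is indexed into destCollection.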






[jira] [Commented] (SOLR-8475) Some refactoring to SolrIndexSearcher

2016-01-05 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082968#comment-15082968
 ] 

Christine Poerschke commented on SOLR-8475:
---

bq. If it's possible to leave deprecated inner classes extending the extracted 
classes, then existing user code should work just fine. I haven't attempted to 
do this, but I think that should work.

I am in the process of attempting this for {{QueryCommand}} only (since my 
unrelated SOLR-8482 change also concerns that class), hoping to post patch(es) 
later today.
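The back-compat pattern being attempted here can be sketched in a few lines (illustrative names, not the actual SolrIndexSearcher code): the logic moves to a top-level class, and a deprecated inner subclass keeps old call sites compiling.

```java
// Extracted top-level class: the new canonical home for the logic.
class QueryCommand {
    private int len;
    public QueryCommand setLen(int len) { this.len = len; return this; }
    public int getLen() { return len; }
}

public class Searcher {
    /** @deprecated use the top-level {@link QueryCommand} instead. */
    @Deprecated
    public static class LegacyQueryCommand extends QueryCommand {}

    public static void main(String[] args) {
        // Existing user code that referred to the old inner class still
        // compiles and behaves identically through the deprecated subclass:
        QueryCommand cmd = new LegacyQueryCommand().setLen(10);
        System.out.println(cmd.getLen());
        // prints: 10
    }
}
```

One wrinkle: fluent setters on the extracted class return the parent type, so chained calls on the deprecated subclass lose the subclass's static type - usually harmless, but worth checking against existing call sites.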

> Some refactoring to SolrIndexSearcher
> -
>
> Key: SOLR-8475
> URL: https://issues.apache.org/jira/browse/SOLR-8475
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Shai Erera
>Assignee: Shai Erera
>Priority: Minor
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8475.patch, SOLR-8475.patch, SOLR-8475.patch, 
> SOLR-8475.patch, SOLR-8475.patch
>
>
> While reviewing {{SolrIndexSearcher}}, I started to correct a thing here and 
> there, and eventually it led to these changes:
> * Moving {{QueryCommand}} and {{QueryResult}} to their own classes.
> * Moving FilterImpl into a private static class (was package-private and 
> defined in the same .java file, but separate class).
> * Some code formatting, imports organizing and minor log changes.
> * Removed fieldNames (handled the TODO in the code)
> * Got rid of usage of deprecated classes such as {{LegacyNumericUtils}} and 
> {{Legacy-*-Field}}.
> I wish we'd cut down the size of this file much more (it's 2500 lines now), 
> but I've decided to stop here so that the patch is manageable. I would like 
> to explore further refactorings afterwards, e.g. extracting cache management 
> code to an outer class (but keep {{SolrIndexSearcher}}'s API the same, if 
> possible).
> If you have additional ideas of more cleanups / simplifications, I'd be glad 
> to do them.






[jira] [Commented] (SOLR-8483) tweak open-exchange-rates.json test-file to avoid OpenExchangeRatesOrgProvider.java warnings

2016-01-05 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15082981#comment-15082981
 ] 

ASF subversion and git services commented on SOLR-8483:
---

Commit 1723057 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1723057 ]

SOLR-8483: relocate 'IMPORTANT NOTE' in open-exchange-rates.json test-file to 
avoid OpenExchangeRatesOrgProvider.java warnings. (merge in revision 1723040 
from trunk)

> tweak open-exchange-rates.json test-file to avoid 
> OpenExchangeRatesOrgProvider.java warnings
> 
>
> Key: SOLR-8483
> URL: https://issues.apache.org/jira/browse/SOLR-8483
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8483.patch
>
>
> Tweak the {{open-exchange-rates.json}} test file so that 
> {{OpenExchangeRatesOrgProvider}} does not emit {{'Unknown key IMPORTANT 
> NOTE'}} and {{'Expected key, got STRING'}} warnings which can be confusing 
> when investigating unrelated test failures.





