[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352399#comment-15352399
 ] 

Noble Paul commented on SOLR-7191:
--

bq. if there is a collection that has more than coreLoadThreadCount

or a 'shard' has more replicas? 

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-27 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352399#comment-15352399
 ] 

Noble Paul edited comment on SOLR-7191 at 6/28/16 5:48 AM:
---

bq. if there is a collection that has more than coreLoadThreadCount

or a 'shard' has more replicas? 


was (Author: noble.paul):
bq.f there is a collection that has more than coreLoadThreadCount

or a 'shard' has more replicas? 




[jira] [Updated] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7191:
-
Attachment: SOLR-7191.patch

removed some unnecessary changes




[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 986 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/986/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CleanupOldIndexTest

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([18CCA68EFD9]:0)


FAILED:  org.apache.solr.cloud.CleanupOldIndexTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([18CCA68EFD9]:0)


FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:35100/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:35100/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([18CCA68EFD9:81CDAE5664948221]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352349#comment-15352349
 ] 

Erick Erickson commented on SOLR-7191:
--


Hmmm, is this a concern based on general principles or on a code path that is 
expected to fail?

I tested a couple of scenarios. In all of them there are 4 JVMs and 3 load threads. The "smoke 
test" was starting all the JVMs at once.

1> 100 collections, 4 shards x 4 replicas each
2> 10 collections, 4 shards x 40 replicas each.

Of course my testing could easily have missed the corner cases that trip this 
as it was pretty bare-bones.






[jira] [Created] (SOLR-9259) SimplePostTool: Improvements for posting hadoop hdfs files

2016-06-27 Thread lvchuanwen (JIRA)
lvchuanwen created SOLR-9259:


 Summary: SimplePostTool: Improvements for posting hadoop hdfs files
 Key: SOLR-9259
 URL: https://issues.apache.org/jira/browse/SOLR-9259
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Reporter: lvchuanwen


Add support for indexing HDFS files.
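To make the one-line description a bit more concrete: a change like this needs to decide, per input argument, whether to read a local file, fetch a web URL, or go through the HDFS client. A minimal sketch of that scheme dispatch (hypothetical: `classify`, `SourceKind`, and the `hdfs://` branch are illustrations, not the actual SimplePostTool code):

```java
import java.net.URI;

public class PostSourceSketch {
    enum SourceKind { LOCAL_FILE, WEB_URL, HDFS }

    // Decide how a post-tool argument should be fetched. The hdfs branch is
    // hypothetical: it models the kind of scheme check SOLR-9259 would need
    // before handing the path to the Hadoop FileSystem client.
    static SourceKind classify(String arg) {
        URI uri = URI.create(arg);
        String scheme = uri.getScheme();
        if (scheme == null || scheme.equals("file")) return SourceKind.LOCAL_FILE;
        if (scheme.equals("hdfs")) return SourceKind.HDFS;
        if (scheme.equals("http") || scheme.equals("https")) return SourceKind.WEB_URL;
        throw new IllegalArgumentException("unsupported scheme: " + scheme);
    }

    public static void main(String[] args) {
        System.out.println(classify("docs/a.json"));               // LOCAL_FILE
        System.out.println(classify("hdfs://nn:8020/data/a.xml")); // HDFS
        System.out.println(classify("https://example.com/b.csv")); // WEB_URL
    }
}
```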






[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 278 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/278/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
3 threads leaked from SUITE scope at 
org.apache.solr.handler.TestSolrConfigHandlerCloud: 1) Thread[id=6822, 
name=Thread-1993, state=TIMED_WAITING, group=TGRP-TestSolrConfigHandlerCloud]   
  at java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2509)   
  at org.apache.solr.core.SolrCore$$Lambda$81/7198632.run(Unknown Source)   
  at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2405)2) 
Thread[id=7943, name=Thread-2076, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2509)   
  at org.apache.solr.core.SolrCore$$Lambda$81/7198632.run(Unknown Source)   
  at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2405)3) 
Thread[id=7314, name=Thread-2025, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
 at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333) 
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
 at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
 at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
 at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)   
  at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920) 
at org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2509)   
  at org.apache.solr.core.SolrCore$$Lambda$81/7198632.run(Unknown Source)   
  at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2405)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 3 threads leaked from SUITE 
scope at org.apache.solr.handler.TestSolrConfigHandlerCloud: 
   1) Thread[id=6822, name=Thread-1993, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333)
at 
org.apache.solr.schema.IndexSchemaFactory.create(IndexSchemaFactory.java:48)
at 
org.apache.solr.schema.IndexSchemaFactory.buildIndexSchema(IndexSchemaFactory.java:75)
at 
org.apache.solr.core.ConfigSetService.createIndexSchema(ConfigSetService.java:107)
at 
org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:78)
at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:920)
at 
org.apache.solr.core.SolrCore.lambda$getConfListener$6(SolrCore.java:2509)
at org.apache.solr.core.SolrCore$$Lambda$81/7198632.run(Unknown Source)
at org.apache.solr.cloud.ZkController$4.run(ZkController.java:2405)
   2) Thread[id=7943, name=Thread-2076, state=TIMED_WAITING, 
group=TGRP-TestSolrConfigHandlerCloud]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.ZkSolrResourceLoader.openResource(ZkSolrResourceLoader.java:101)
at 
org.apache.solr.core.SolrResourceLoader.openSchema(SolrResourceLoader.java:333)
at 

[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-27 Thread damien kamerman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352228#comment-15352228
 ] 

damien kamerman commented on SOLR-7191:
---

Only coreLoadThreadCount cores are registering at a time on each JVM, so
the concern is that, if a collection has more than coreLoadThreadCount
replicas on a JVM, registration could fail.





-- 
Damien Kamerman
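The failure mode Damien describes can be modeled in isolation. In this toy sketch (not Solr's actual CoreContainer code; `registerShard` and the latch are hypothetical stand-ins for shard-wide coordination during registration), each replica waits until every replica of its shard has started registering. With a pool at least as large as the replica count the wait succeeds; with a smaller pool the running replicas saturate every thread while waiting, the remaining replicas never start, and the waiters time out:

```java
import java.util.concurrent.*;

public class CoreLoadDeadlockSketch {
    // Simulate registering one shard's replicas on a node with a bounded pool.
    // Each task counts itself in, then waits (with a timeout) until all
    // replicas of the shard have started, mimicking replicas that block on
    // shard-wide coordination such as leader election.
    static boolean registerShard(int replicas, int coreLoadThreads) {
        ExecutorService pool = Executors.newFixedThreadPool(coreLoadThreads);
        try {
            CountDownLatch allStarted = new CountDownLatch(replicas);
            CompletionService<Boolean> cs = new ExecutorCompletionService<>(pool);
            for (int i = 0; i < replicas; i++) {
                cs.submit(() -> {
                    allStarted.countDown();  // this replica is now registering
                    // wait for the rest of the shard; give up after a timeout
                    return allStarted.await(500, TimeUnit.MILLISECONDS);
                });
            }
            boolean ok = true;
            for (int i = 0; i < replicas; i++) {
                ok &= cs.take().get();       // false => a replica timed out
            }
            return ok;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // Enough threads: every replica sees the whole shard come up.
        System.out.println(registerShard(4, 4));   // true
        // Fewer threads than replicas: the pool fills with waiters, the
        // remaining replicas never start, and the waiters time out.
        System.out.println(registerShard(4, 3));   // false
    }
}
```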





[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5940 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5940/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestSolrConfigHandlerCloud

Error Message:
ObjectTracker found 2 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([DB5CF795D701129B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor13.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 8 object(s) that were not released!!! 
[MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, TransactionLog, 
TransactionLog, MDCAwareThreadPoolExecutor]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 8 object(s) that were not 
released!!! [MockDirectoryWrapper, MockDirectoryWrapper, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, TransactionLog, 
TransactionLog, MDCAwareThreadPoolExecutor]
at __randomizedtesting.SeedInfo.seed([DB5CF795D701129B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[JENKINS] Lucene-Solr-Tests-6.x - Build # 292 - Failure

2016-06-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/292/

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([F9FD33E48CF2DC83:432F5C9C0FDC3296]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:325)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:774)
... 40 more




Build Log:
[...truncated 11595 lines...]
   [junit4] Suite: org.apache.solr.update.AutoCommitTest
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352216#comment-15352216
 ] 

Erick Erickson commented on SOLR-7191:
--

Noble:

A few comments:

> the stress setup I have is sailing right through the parts that were dying 
> last week, so this is looking good.

> I also tried back-porting this to 5x and it seems to be working equally well 
> there

> There's a comment in CoreContainer:
   // OK to limit the size of the executor in zk mode as cores are loaded in order.
   // This assumes replicaCount is less than coreLoadThreadCount?
I didn't read it before I started testing, so I didn't know enough to be 
scared... I'm starting 400 cores in each JVM with 10 coreLoadThreads, so does 
the fact that the loading is in order keep fewer threads than cores from being a 
problem? I also experimented with 3 threads (4 shards, 4 replicas each) and saw 
no problems.
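Whether a given layout is exposed to the concern comes down to a simple predicate: does any single shard place more replicas on one JVM than that JVM has core-load threads? A toy check (hypothetical helper, not Solr code) that formalizes this for the scenarios above:

```java
import java.util.Map;

public class CoreLoadRiskCheck {
    // replicasPerNode: for one shard, how many of its replicas land on each
    // node. The registration concern arises when some node hosts more replicas
    // of a single shard than it has core-load threads: the extra replicas
    // cannot start until a waiting thread gives up.
    static boolean shardAtRisk(Map<String, Integer> replicasPerNode, int coreLoadThreads) {
        return replicasPerNode.values().stream().anyMatch(n -> n > coreLoadThreads);
    }

    public static void main(String[] args) {
        // A shard with 4 replicas spread 1-per-JVM over 4 JVMs, 3 load
        // threads: at most 1 replica of the shard per node, so no risk.
        System.out.println(shardAtRisk(
                Map.of("jvm1", 1, "jvm2", 1, "jvm3", 1, "jvm4", 1), 3));  // false
        // 4 replicas of one shard packed onto a single JVM with 3 load
        // threads: the fourth replica cannot start while the first three wait.
        System.out.println(shardAtRisk(Map.of("jvm1", 4), 3));            // true
    }
}
```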







[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15352165#comment-15352165
 ] 

Mark Miller commented on SOLR-9076:
---

I took the .hack test classes out to rule those out as a source. I still see 
the same strange issues around missing Netty classes. That's odd, because this 
upgrade is what drove moving to Netty 4 in the first place, so why does it 
want Netty 3 classes? Does Hadoop have conflicting Netty requirements? We 
really want to avoid bringing in more than one version if we can help it, even 
for tests. But it seems you got past that, so I'm not sure why I still saw 
class-not-found problems with Netty even with the patch.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-fixnetty.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch
>
>







[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+124) - Build # 985 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/985/
Java: 32bit/jdk-9-ea+124 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B 
val", "":{"v":0},  from server:  
http://127.0.0.1:40104/_lnu/k/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0},  from server:  http://127.0.0.1:40104/_lnu/k/collection1
at 
__randomizedtesting.SeedInfo.seed([6CE5C586258CDA34:E4B1FA5C8B70B7CC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:159)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 226 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/226/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'P val' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":1, "params":{   "x":{ "a":"A val", 
"b":"B val", "":{"v":0}},   "y":{ "c":"CY val", 
"b":"BY val", "i":20, "d":[   "val 1",   "val 
2"], "":{"v":0},  from server:  http://127.0.0.1:34830/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":1,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"c":"CY val",
"b":"BY val",
"i":20,
"d":[
  "val 1",
  "val 2"],
"":{"v":0},  from server:  http://127.0.0.1:34830/collection1
at 
__randomizedtesting.SeedInfo.seed([8640E0B47D103F2F:E14DF6ED3EC52D7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:215)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-27 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15352114#comment-15352114
 ] 

Gregory Chanan commented on SOLR-9076:
--

Saw this on another machine:

{code}
   [junit4]> Throwable #1: java.io.IOException: Failed on local exception: 
java.io.IOException: Broken pipe; Host Details : local host is: 
"ubuntu14-ec2-beefy-slave-03a7.vpc.cloudera.com/172.26.18.223"; destination 
host is: "ubuntu14-ec2-beefy-slave-03a7.vpc.cloudera.com":53094; 
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([F57BFC0B596FCED0:FB29480558F9FCDF]:0)
   [junit4]>at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:776)
   [junit4]>at org.apache.hadoop.ipc.Client.call(Client.java:1479)
   [junit4]>at org.apache.hadoop.ipc.Client.call(Client.java:1412)
   [junit4]>at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
   [junit4]>at com.sun.proxy.$Proxy112.getClusterMetrics(Unknown 
Source)
   [junit4]>at 
org.apache.hadoop.yarn.api.impl.pb.client.ApplicationClientProtocolPBClientImpl.getClusterMetrics(ApplicationClientProtocolPBClientImpl.java:206)
   [junit4]>at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:191)
   [junit4]>at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
   [junit4]>at com.sun.proxy.$Proxy113.getClusterMetrics(Unknown 
Source)
   [junit4]>at 
org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.getYarnClusterMetrics(YarnClientImpl.java:487)
   [junit4]>at 
org.apache.hadoop.mapred.ResourceMgrDelegate.getClusterMetrics(ResourceMgrDelegate.java:151)
   [junit4]>at 
org.apache.hadoop.mapred.YARNRunner.getClusterMetrics(YARNRunner.java:179)
   [junit4]>at 
org.apache.hadoop.mapreduce.Cluster.getClusterStatus(Cluster.java:247)
   [junit4]>at 
org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:748)
   [junit4]>at 
org.apache.hadoop.mapred.JobClient$3.run(JobClient.java:746)
   [junit4]>at java.security.AccessController.doPrivileged(Native 
Method)
   [junit4]>at javax.security.auth.Subject.doAs(Subject.java:422)
   [junit4]>at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
   [junit4]>at 
org.apache.hadoop.mapred.JobClient.getClusterStatus(JobClient.java:746)
   [junit4]>at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:642)
   [junit4]>at 
org.apache.solr.hadoop.MapReduceIndexerTool.run(MapReduceIndexerTool.java:605)
   [junit4]>at 
org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   [junit4]>at 
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.mrRun(MorphlineBasicMiniMRTest.java:364)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.io.IOException: Broken pipe
   [junit4]>at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
   [junit4]>at 
sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
   [junit4]>at 
sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
   [junit4]>at sun.nio.ch.IOUtil.write(IOUtil.java:65)
   [junit4]>at 
sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
   [junit4]>at 
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
   [junit4]>at 
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
   [junit4]>at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
   [junit4]>at 
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
   [junit4]>at 
java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
   [junit4]>at 
java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
   [junit4]>at 
java.io.DataOutputStream.flush(DataOutputStream.java:123)
   [junit4]>at 
org.apache.hadoop.ipc.Client$Connection$3.run(Client.java:1043)
   [junit4]>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
   [junit4]>... 1 more
{code}

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
> 

[jira] [Updated] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-27 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-9076:
-
Attachment: SOLR-9076-fixnetty.patch

Here's a patch that adds the netty dependency.  I'm still seeing test failures 
locally; not sure yet whether they are a product of my environment.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076-fixnetty.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch
>
>







[jira] [Updated] (SOLR-9185) Solr's "Lucene"/standard query parser should not split on whitespace before sending terms to analysis

2016-06-27 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9185:
-
Attachment: SOLR-9185.patch

Another WIP patch.  Progress: the parser generates (with {{ant javacc}}) and 
compiles (after first applying the patch from LUCENE-2605 and regenerating), 
and most tests pass with the default split-on-whitespace option (i.e. *true*, 
preserving the old behavior).  Failing tests (haven't investigated yet):

* {{TestSolrQueryParser.testComments()}}
* {{TestPostingsSolrHighlighter}}: {{testDefaultSummary()}} and 
{{testEmptySnippet()}}

> Solr's "Lucene"/standard query parser should not split on whitespace before 
> sending terms to analysis
> -
>
> Key: SOLR-9185
> URL: https://issues.apache.org/jira/browse/SOLR-9185
> Project: Solr
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Attachments: SOLR-9185.patch, SOLR-9185.patch
>
>
> Copied from LUCENE-2605:
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.
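For illustration (toy Python, not the Lucene query parser): if the parser splits on whitespace before analysis, a multi-word synonym rule can never fire, because each analysis call sees only a single word.

```python
# Toy model of a "synonym filter" that only fires when it sees the
# whole phrase in a single analysis call.
SYNONYMS = {"new york": ["ny"]}

def analyze(text):
    # Emit the input plus any synonyms registered for the whole input.
    return [text] + SYNONYMS.get(text, [])

# Current parser behavior: split on whitespace first, analyze each word.
split_first = [tok for word in "new york".split() for tok in analyze(word)]
# Desired behavior: hand the whole string to analysis.
whole_string = analyze("new york")

assert split_first == ["new", "york"]         # synonym never fires
assert whole_string == ["new york", "ny"]     # synonym fires
```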






[jira] [Updated] (LUCENE-2605) queryparser parses on whitespace

2016-06-27 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-2605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-2605:
---
Attachment: LUCENE-2605.patch

Patch adds the lucene-test-framework files that were missing from the last 
version of the patch.  Also adds a CHANGES entry.

I plan on committing in a couple of days if there are no objections.

> queryparser parses on whitespace
> 
>
> Key: LUCENE-2605
> URL: https://issues.apache.org/jira/browse/LUCENE-2605
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Robert Muir
>Assignee: Steve Rowe
> Attachments: LUCENE-2605.patch, LUCENE-2605.patch, LUCENE-2605.patch, 
> LUCENE-2605.patch, LUCENE-2605.patch
>
>
> The queryparser parses input on whitespace, and sends each whitespace 
> separated term to its own independent token stream.
> This breaks the following at query-time, because they can't see across 
> whitespace boundaries:
> * n-gram analysis
> * shingles 
> * synonyms (especially multi-word for whitespace-separated languages)
> * languages where a 'word' can contain whitespace (e.g. vietnamese)
> It's also rather unexpected, as users think their 
> charfilters/tokenizers/tokenfilters will do the same thing at index and 
> querytime, but
> in many cases they can't. Instead, preferably the queryparser would parse 
> around only real 'operators'.






[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Description: 
This ticket describes a framework for *optimizing*, *storing* and *deploying* 
AI models within the Streaming Expression framework.

*Optimizing*
[~caomanhdat] has contributed SOLR-9252, which provides *Streaming Expressions* 
for both feature selection and optimization of a logistic regression text 
classifier. SOLR-9252 also provides a great working example of *optimization* 
of a machine learning model using an in-place parallel iterative algorithm.

*Storing*

Both features and optimized models can be stored in SolrCloud collections using 
the update expression. Using [~caomanhdat]'s example in SOLR-9252, the pseudo 
code for storing features would be:

{code}
update(featuresCollection, 
   featuresSelection(collection1, 
id="myFeatures", 
q="*:*",  
field="tv_text", 
outcome="out_i", 
positiveLabel=1, 
numTerms=100))
{code}  

The id field can be added to the featureSelection expression so that the 
features can later be retrieved from the collection they are stored in.

*Deploying*

With the introduction of the topic() expression, SolrCloud can be treated as a 
distributed message queue. This messaging capability can be used to deploy 
models and to process data through them.

To implement this approach, a classify() function can be created that uses a 
topic() function to return both the model and the data to be classified. The 
pseudo code looks like this:

{code}
classify(topic(models, q="modelID", fl="features, weights"),
 topic(emails, q="*:*", fl="id, body", rows="500", version="3232323"))
{code}


In the example above the classify() function uses the topic() function to 
retrieve the model. Each time there is an update to the model in the index, the 
topic() expression will automatically read the new model.

The topic() function is also used to pull in the data set that is being 
classified. Notice the *version* parameter: this will be added to the topic 
function to support pulling results starting from a specific version number 
(JIRA ticket to follow).
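As a mental model only (illustrative Python, not Solr internals), the version checkpoint can be pictured as a pull that returns only documents newer than the last version seen, then advances the checkpoint:

```python
# Toy model of the topic() checkpoint idea: each pull returns only
# documents whose _version_ exceeds the checkpoint, then advances it.
docs = [{"id": i, "_version_": v} for i, v in enumerate([10, 20, 30, 40])]

def topic_pull(checkpoint):
    batch = [d for d in docs if d["_version_"] > checkpoint["v"]]
    if batch:
        checkpoint["v"] = max(d["_version_"] for d in batch)
    return batch

cp = {"v": 20}                 # like version="20": resume after version 20
first = topic_pull(cp)
assert [d["id"] for d in first] == [2, 3]
assert topic_pull(cp) == []    # nothing new on the next pull
```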

With this approach both the model and the data to process through the model are 
treated as messages in a message queue.

The daemon function can be used to send the classify function to Solr where it 
will be run in the background. The pseudo code looks like this:

{code}
daemon(...,
 update(classifiedEmails, 
 classify(topic(models, q="modelID", fl="features, weights"),
  topic(emails, q="*:*", fl="id, fl, body", rows="500", 
version="3232323"
{code}

In this scenario the daemon will run the classify function repeatedly in the 
background. With each run the topic() functions will re-pull the model if the 
model has been updated. It will also pull a new set of emails to be classified. 
The classified emails can be stored in another SolrCloud collection using the 
update() function.

Using this approach emails can be classified in batches. The daemon can 
continue to run even after all the emails have been classified. New emails 
added to the emails collection will then be automatically classified when they 
enter the index.

Classification can be done in parallel once SOLR-9240 is completed. This will 
allow topic() results to be partitioned across worker nodes so they can be 
processed in parallel. The pseudo code for this is:

{code}
parallel(workerCollection, worker="20", ...,
 daemon(...,
   update(classifiedEmails, 
   classify(topic(models, q="modelID", fl="features, 
weights", partitionKeys="none"),
topic(emails, q="*:*", fl="id, fl, body", 
rows="500", version="3232323", partitionKeys="id")
{code}

The code above sends a daemon to 20 workers, which will each classify a 
partition of records pulled by the topic() function.
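A hedged sketch of what partitionKeys-style routing could look like (the names and the hash are illustrative, not Solr's actual implementation): each tuple is routed to exactly one of the 20 workers by a stable hash of its id.

```python
# Hypothetical sketch of routing tuples to workers by a stable hash of
# the partition key. Solr's real hashing and transport differ.
import zlib

def partition_for(key, num_workers):
    # Stable hash of the key -> worker index.
    return zlib.crc32(key.encode()) % num_workers

tuples = [{"id": f"email{i}", "body": "..."} for i in range(1000)]
num_workers = 20
partitions = [[] for _ in range(num_workers)]
for t in tuples:
    partitions[partition_for(t["id"], num_workers)].append(t)

# Every tuple lands in exactly one partition, and routing is stable.
assert sum(len(p) for p in partitions) == 1000
assert partition_for("email7", 20) == partition_for("email7", 20)
```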

*AI based alerting*

If the *version* parameter is not supplied to the topic stream, it will stream 
only new content from the topic, rather than starting from an older version 
number.

In this scenario the topic function behaves like an alert. Pseudo code for an 
alert looks like this:

{code}
daemon(...,
 alert(..., 
 classify(topic(models, q="modelID", fl="features, weights"),
  topic(emails, q="*:*", fl="id, fl, body", rows="500"
{code}

In the example above an alert() function wraps the classify() function and 
takes actions based on the classification of documents. Developers can build 
their own alert functions using the Streaming API and plug them in to provide 
custom actions.
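To make the plug-in idea concrete, a minimal sketch (illustrative Python; the real Streaming API is Java, and these function and field names are made up): an alert() wrapper invokes a caller-supplied action for each classified document above a threshold.

```python
# Illustrative only: a linear "classifier" and an alert wrapper that
# fires a caller-supplied action on high-scoring documents.
def classify(doc, weights):
    score = sum(weights.get(term, 0.0) for term in doc["body"].split())
    return {**doc, "score_d": score}

def alert(stream, threshold, action):
    # Invoke the pluggable action for each document over the threshold.
    for doc in stream:
        if doc["score_d"] >= threshold:
            action(doc)

weights = {"urgent": 1.0, "invoice": 0.5}
emails = [{"id": "1", "body": "urgent invoice"}, {"id": "2", "body": "hello"}]
flagged = []
alert((classify(e, weights) for e in emails), 1.0, flagged.append)
assert [d["id"] for d in flagged] == ["1"]
```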

  was:
This ticket describes a framework for 

[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Description: 
This ticket describes a framework for *optimizing*, *storing* and *deploying* 
AI models within the Streaming Expression framework.

*Optimizing*
[~caomanhdat], has contributed SOLR-9252 which provides *Streaming Expressions* 
for both feature selection and optimization of a logistic regression text 
classifier. SOLR-9252 also provides a great working example of *optimization* 
of a machine learning model using an in-place parallel iterative algorithm.

*Storing*

Both features and optimized models can be stored in SolrCloud collections using 
the update expression. Using [~caomanhdat]'s example in SOLR-9252, the pseudo 
code for storing features would be:

{code}
update(featuresCollection, 
   featuresSelection(collection1, 
id="myFeatures", 
q="*:*",  
field="tv_text", 
outcome="out_i", 
positiveLabel=1, 
numTerms=100))
{code}  

The id field can be added to the featureSelection expression so that features 
can be later retrieved from the collection it's stored in.

*Deploying*

With the introduction of the topic() expression, SolrCloud can be treated as a 
distributed message queue. This messaging capability can  be used to deploy 
models and process data through the models.

To implement this approach a classify() function can be created that uses a 
topic() function to return both the model and the data to be classified:

The pseudo code looks like this:

{code}
classify(topic(models, q="modelID", fl="features, weights"),
 topic(emails, q="*:*", fl="id, body", rows="500", version="3232323"))
{code}


In the example above the classify() function uses the topic() function to 
retrieve the model. Each time there is an updated to the model in the index, 
the topic() expression will automatically read the new model.

The topic function() is also used to pull in the data set that is being 
classified. Notice the *version* parameter. This will be added to the topic 
function to support pulling results from a specific version number (jira ticket 
to follow).

With this approach both the model and the data to process through the model are 
treated as messages in a message queue.

The daemon function can be used to send the classify function to Solr where it 
will be run in the background. The pseudo code looks like this:

{code}
daemon(...,
 update(classifiedEmails, 
 classify(topic(models, q="modelID", fl="features, weights"),
  topic(emails, q="*:*", fl="id, fl, body", rows="500", 
version="3232323"
{code}

In this scenario the daemon will run the classify function repeatedly in the 
background. With each run the topic() functions will re-pull the model if the 
model has been updated. It will also pull a new set of emails to be classified. 
The classified emails can be stored in another SolrCloud collection using the 
update() function.

Using this approach emails can be classified in batches. The daemon can 
continue to run even after all the emails have been classified. New emails 
added to the emails collection will then be automatically classified when they 
enter the index.
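The daemon/topic interaction can be sketched in miniature. The toy code below 
is not Solr's implementation (the real topic() stream tracks per-shard 
checkpoints in a checkpoint collection), but it shows the key property: each 
run pulls only documents newer than the last checkpoint, so every email is 
processed exactly once even though the daemon loops forever:

```python
def topic_pull(index, checkpoint, rows=500):
    """Toy stand-in for topic(): return docs whose _version_ is greater
    than the checkpoint, plus the advanced checkpoint."""
    batch = [d for d in index if d["_version_"] > checkpoint][:rows]
    new_checkpoint = max([checkpoint] + [d["_version_"] for d in batch])
    return batch, new_checkpoint

def daemon_loop(index, classify_fn, store, iterations=3):
    """Toy daemon: repeatedly pull a batch, classify it, store results."""
    checkpoint = 0
    for _ in range(iterations):
        batch, checkpoint = topic_pull(index, checkpoint)
        store.extend(classify_fn(batch))

emails = [{"id": "1", "_version_": 10, "body": "spam spam"},
          {"id": "2", "_version_": 20, "body": "hello"}]
out = []
daemon_loop(emails, lambda batch: [{"id": d["id"]} for d in batch], out)
# out holds one classified tuple per email, despite three daemon runs
```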

Classification can be done in parallel once SOLR-9240 is completed. This will 
allow topic() results to be partitioned across worker nodes so they can be 
processed in parallel. The pseudo code for this is:

{code}
parallel(workerCollection, worker="20", ...,
         daemon(...,
                update(classifiedEmails,
                       classify(topic(models, q="modelID", fl="features, weights",
                                      partitionKeys="none"),
                                topic(emails, q="*:*", fl="id, fl, body",
                                      rows="500", version="3232323",
                                      partitionKeys="id")))))
{code}

The code above sends a daemon to 20 workers, which will each classify a 
partition of records pulled by the topic() function.

*AI based alerting*

If the *version* parameter is not supplied to the topic stream it will stream 
only new content from the topic, rather than starting from an older version 
number.

In this scenario the topic function behaves like an alert. Pseudo code for 
alerts looks like this:

{code}
daemon(...,
       alert(...,
             classify(topic(models, q="modelID", fl="features, weights"),
                      topic(emails, q="*:*", fl="id, fl, body", rows="500"))))
{code}

In the example above an alert() function wraps the classify() function and 
takes actions based on the classification of documents. Developers can build 
their own alert functions using the Streaming API and plug them in to provide 
custom actions.
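A minimal sketch of what such a pluggable alert function might do (the alert() 
decorator, threshold, and field names here are hypothetical, not an existing 
Solr API):

```python
def alert(action, classified, threshold=0.9):
    """Toy alert(): invoke `action` for each classified tuple whose
    probability crosses the threshold; return the ids that fired."""
    fired = []
    for doc in classified:
        if doc["probability"] >= threshold:
            action(doc)
            fired.append(doc["id"])
    return fired

alerts = []
fired = alert(alerts.append,
              [{"id": "1", "probability": 0.95},   # crosses threshold
               {"id": "2", "probability": 0.10}])  # passes through silently
```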












 






  was:
This ticket describes 

[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Fix Version/s: 6.2

> Optimizing, storing and deploying AI models with Streaming Expressions
> --
>
> Key: SOLR-9258
> URL: https://issues.apache.org/jira/browse/SOLR-9258
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Fix For: 6.2
>


[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 676 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/676/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Didn't see all replicas for shard shard1 in c8n_1x2 come up within 9 ms! 
ClusterState: (flattened duplicate of the cluster state printed in the stack 
trace below)

Stack Trace:
java.lang.AssertionError: Didn't see all replicas for shard shard1 in c8n_1x2 
come up within 9 ms! ClusterState: {
  "collMinRf_1x3":{
    "replicationFactor":"3",
    "shards":{"shard1":{
        "range":"8000-7fff",
        "state":"active",
        "replicas":{
          "core_node1":{
            "core":"collMinRf_1x3_shard1_replica3",
            "base_url":"http://127.0.0.1:50911/p_qj/t",
            "node_name":"127.0.0.1:50911_p_qj%2Ft",
            "state":"active"},
          "core_node2":{
            "core":"collMinRf_1x3_shard1_replica1",
            "base_url":"http://127.0.0.1:41670/p_qj/t",
            "node_name":"127.0.0.1:41670_p_qj%2Ft",
            "state":"active",
            "leader":"true"},
          "core_node3":{
            "core":"collMinRf_1x3_shard1_replica2",
            "base_url":"http://127.0.0.1:61814/p_qj/t",
            "node_name":"127.0.0.1:61814_p_qj%2Ft",
            "state":"active"}}}},
    "router":{"name":"compositeId"},
    "maxShardsPerNode":"1",
    "autoAddReplicas":"false"},
  "collection1":{
    "replicationFactor":"1",
    "shards":{
      "shard1":{
        "range":"8000-",
        "state":"active",
        "replicas":{"core_node2":{
            "core":"collection1",
            "base_url":"http://127.0.0.1:39427/p_qj/t",
            "node_name":"127.0.0.1:39427_p_qj%2Ft",
            "state":"active",
            "leader":"true"}}},
  "shard2":{

[jira] [Commented] (SOLR-9257) Basic Authentication - Internode Requests Fail With 401

2016-06-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351987#comment-15351987
 ] 

Martin Löper commented on SOLR-9257:


Yes exactly! Thank you for pointing that out!
I forgot to mention that the blockUnknown property is set in my scenario too. I 
see no reason not to set the blockUnknown property to true, so this is quite 
important for the whole BasicAuthentication plugin to be usable in production.

> Basic Authentication - Internode Requests Fail With 401
> ---
>
> Key: SOLR-9257
> URL: https://issues.apache.org/jira/browse/SOLR-9257
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.1
>Reporter: Martin Löper
>  Labels: authentification, pki, security, ssl
>
> I enabled SSL successfully and subsequently also turned on the 
> BasicAuthentication Plugin along with Rule-Based Authentication in SolrCloud 
> mode. This works well when there is no inter-node communication. As soon as I 
> create a collection with 2 shards, I get the following exception for every 
> access of the "/select" request handler.
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":401,
> "QTime":181,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json",
>   "_":"1467062257216"}},
>   "error":{
> "metadata":[
>   
> "error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException",
>   
> "root-error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"],
> "msg":"Error from server at 
> https://myserver.xxx.corp:8983/solr/mycollection_shard2_replica1: Expected 
> mime type application/octet-stream but got text/html. \n\n http-equiv=\"Content-Type\" 
> content=\"text/html;charset=utf-8\"/>\nError 401 Unauthorized request, 
> Response code: 401\n\nHTTP ERROR 
> 401\nProblem accessing /solr/mycollection_shard2_replica1/select. 
> Reason:\nUnauthorized request, Response code: 
> 401\n\n\n",
> "code":401}}
> There are also PKIAuthenticationPlugin exceptions before the exception above:
> Exception trying to get public key from : https://myserver.xxx.corp:8983/solr
> org.noggit.JSONParser$ParseException: JSON Parse Error: char=<,position=0 
> BEFORE='<' AFTER='html>  

[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Description: 
This ticket describes a framework for Optimizing, storing and deploying AI 
models within the Streaming Expression framework.

*Optimizing*
[~caomanhdat] has contributed SOLR-9252, which provides *Streaming Expressions* 
for both feature selection and optimization of a logistic regression text 
classifier. SOLR-9252 also provides a great working example of the *optimization* 
of a machine learning model using an in-place parallel iterative algorithm.

*Storing*

Both the features and the optimized model can be stored in SolrCloud collections 
using the update expression. Using [~caomanhdat]   







  was:
This ticket describes a framework for Optimizing, storing and deploying AI 
models within the Streaming Expression framework.

[~caomanhdat]




> Optimizing, storing and deploying AI models with Streaming Expressions
> --
>
> Key: SOLR-9258
> URL: https://issues.apache.org/jira/browse/SOLR-9258
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This ticket describes a framework for Optimizing, storing and deploying AI 
> models within the Streaming Expression framework.
> *Optimizing*
> [~caomanhdat] has contributed SOLR-9252, which provides *Streaming 
> Expressions* for both feature selection and optimization of a logistic 
> regression text classifier. SOLR-9252 also provides a great working example 
> of the *optimization* of a machine learning model using an in-place parallel 
> iterative algorithm.
> *Storing*
> Both the features and the optimized model can be stored in SolrCloud 
> collections using the update expression. Using [~caomanhdat]   






[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Description: 
This ticket describes a framework for Optimizing, storing and deploying AI 
models within the Streaming Expression framework.

[~caomanhdat]



  was:
This ticket describes a framework for Optimizing, storing and deploying AI 
models within the Streaming Expression framework.




> Optimizing, storing and deploying AI models with Streaming Expressions
> --
>
> Key: SOLR-9258
> URL: https://issues.apache.org/jira/browse/SOLR-9258
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This ticket describes a framework for Optimizing, storing and deploying AI 
> models within the Streaming Expression framework.
> [~caomanhdat]






[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Description: 
This ticket describes a framework for Optimizing, storing and deploying AI 
models within the Streaming Expression framework.

[~cao man

> Optimizing, storing and deploying AI models with Streaming Expressions
> --
>
> Key: SOLR-9258
> URL: https://issues.apache.org/jira/browse/SOLR-9258
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This ticket describes a framework for Optimizing, storing and deploying AI 
> models within the Streaming Expression framework.
> [~cao man






[jira] [Updated] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9258:
-
Description: 
This ticket describes a framework for Optimizing, storing and deploying AI 
models within the Streaming Expression framework.



  was:
This ticket describes a framework for Optimizing, storing and deploying AI 
models within the Streaming Expression framework.

[~cao man


> Optimizing, storing and deploying AI models with Streaming Expressions
> --
>
> Key: SOLR-9258
> URL: https://issues.apache.org/jira/browse/SOLR-9258
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>
> This ticket describes a framework for Optimizing, storing and deploying AI 
> models within the Streaming Expression framework.






[jira] [Created] (SOLR-9258) Optimizing, storing and deploying AI models with Streaming Expressions

2016-06-27 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-9258:


 Summary: Optimizing, storing and deploying AI models with 
Streaming Expressions
 Key: SOLR-9258
 URL: https://issues.apache.org/jira/browse/SOLR-9258
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein









[jira] [Comment Edited] (SOLR-9252) Feature selection and logistic regression on text

2016-06-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351965#comment-15351965
 ] 

Joel Bernstein edited comment on SOLR-9252 at 6/27/16 10:29 PM:


This is an exciting patch!

I closed out SOLR-9186 so work can focus on this patch.

I'll open another ticket describing a broader framework for *optimizing*, 
*storing* and *deploying* AI models within the Streaming Expression framework 
and link it to this ticket.


was (Author: joel.bernstein):
This is an exciting patch!

I closed out SOLR-9186 so work can focus on this patch.

I'll open another ticket describing a broader framework for *optimizing*, 
*storing* and *deploying* AI models within SolrCloud and link it to this ticket.

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9252.patch, enron1.zip
>
>
> SOLR-9186 came up with a challenge: for each iteration we have to rebuild 
> the tf-idf vector for every document. This is a costly computation if we 
> represent a document by a lot of terms. Feature selection can help reduce 
> the computation.
> Due to its computational efficiency and simple interpretation, information 
> gain is one of the most popular feature selection methods. It is used to 
> measure the dependence between features and labels and calculates the 
> information gain between the i-th feature and the class labels 
> (http://www.jiliang.xyz/publication/feature_selection_for_classification.pdf).
> I confirmed this by running logistic regression on the Enron mail dataset 
> (in which each email is represented by the top 100 terms with the highest 
> information gain) and got 92% accuracy and 82% precision.
> This ticket will create two new streaming expressions. Both of them use the 
> same *parallel iterative framework* as SOLR-8492.
> {code}
> featuresSelection(collection1, q="*:*",  field="tv_text", outcome="out_i", 
> positiveLabel=1, numTerms=100)
> {code}
> featuresSelection will emit the top terms with the highest information gain 
> scores. It can be combined with the new tlogit stream.
> {code}
> tlogit(collection1, q="*:*",
>  featuresSelection(collection1, 
>   q="*:*",  
>   field="tv_text", 
>   outcome="out_i", 
>   positiveLabel=1, 
>   numTerms=100),
>  field="tv_text",
>  outcome="out_i",
>  maxIterations=100)
> {code}
> In iteration n, the text logistic regression will emit the nth model and 
> compute the error of the (n-1)th model, because the error would be wrong if 
> computed dynamically within the same iteration. 
> In each iteration tlogit will adjust the learning rate based on the error of 
> the previous iteration: it will increase the learning rate by 5% if the error 
> is going down and decrease it by 50% if the error is going up.
> This will support use cases such as building models for spam detection, 
> sentiment analysis and threat detection. 
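As a rough illustration of the information-gain criterion described above (a sketch over binary term-presence features, not the patch's actual implementation):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(term_present, labels):
    """IG of a binary feature (term present/absent) w.r.t. class labels."""
    n = len(labels)
    gain = entropy(labels)
    for value in (True, False):
        subset = [lab for has, lab in zip(term_present, labels) if has == value]
        if subset:
            gain -= (len(subset) / n) * entropy(subset)
    return gain

# Toy corpus: 4 docs, term-occurrence flags and spam labels.
has_term = [True, True, False, False]
labels = [1, 1, 0, 0]  # the term perfectly separates the classes
print(information_gain(has_term, labels))  # -> 1.0
```

A term whose presence is independent of the label scores an information gain of 0, so ranking terms by this score and keeping the top 100 is what the description above refers to.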






[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-27 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351964#comment-15351964
 ] 

Gregory Chanan commented on SOLR-9076:
--

I added org.jboss.netty:netty version 3.2.4.Final and now I get this:
{code}
 2> java.lang.reflect.InvocationTargetException
   [junit4]   2>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2>at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2>at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2>at java.lang.reflect.Method.invoke(Method.java:498)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MethodMetric$2.snapshot(MethodMetric.java:111)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MethodMetric.snapshot(MethodMetric.java:144)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MetricsRegistry.snapshot(MetricsRegistry.java:401)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.MetricsSourceBuilder$1.getMetrics(MetricsSourceBuilder.java:79)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMetrics(MetricsSourceAdapter.java:194)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:172)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getMBeanInfo(MetricsSourceAdapter.java:151)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getClassName(DefaultMBeanServerInterceptor.java:1804)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.safeGetClassName(DefaultMBeanServerInterceptor.java:1595)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanPermission(DefaultMBeanServerInterceptor.java:1813)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.exclusiveUnregisterMBean(DefaultMBeanServerInterceptor.java:430)
   [junit4]   2>at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.unregisterMBean(DefaultMBeanServerInterceptor.java:415)
   [junit4]   2>at 
com.sun.jmx.mbeanserver.JmxMBeanServer.unregisterMBean(JmxMBeanServer.java:546)
   [junit4]   2>at 
org.apache.hadoop.metrics2.util.MBeans.unregister(MBeans.java:81)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stopMBeans(MetricsSourceAdapter.java:226)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.stop(MetricsSourceAdapter.java:211)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stopSources(MetricsSystemImpl.java:463)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.stop(MetricsSystemImpl.java:213)
   [junit4]   2>at 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl.shutdown(MetricsSystemImpl.java:594)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdownInstance(DefaultMetricsSystem.java:72)
   [junit4]   2>at 
org.apache.hadoop.metrics2.lib.DefaultMetricsSystem.shutdown(DefaultMetricsSystem.java:68)
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.namenode.metrics.NameNodeMetrics.shutdown(NameNodeMetrics.java:171)
   [junit4]   2>at 
org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:872)
   [junit4]   2>at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1726)
   [junit4]   2>at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdown(MiniDFSCluster.java:1705)
   [junit4]   2>at 
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.teardownClass(MorphlineBasicMiniMRTest.java:196)
   [junit4]   2>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2>at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2>at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2>at java.lang.reflect.Method.invoke(Method.java:498)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
   [junit4]   2>at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   [junit4]   2>   

[jira] [Commented] (SOLR-9252) Feature selection and logistic regression on text

2016-06-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351965#comment-15351965
 ] 

Joel Bernstein commented on SOLR-9252:
--

This is an exciting patch!

I closed out SOLR-9186 so work can focus on this patch.

I'll open another ticket describing a broader framework for *optimizing*, 
*storing* and *deploying* AI models within SolrCloud and link it to this ticket.

> Feature selection and logistic regression on text
> -
>
> Key: SOLR-9252
> URL: https://issues.apache.org/jira/browse/SOLR-9252
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
> Attachments: SOLR-9252.patch, enron1.zip
>
>
> SOLR-9186 came up with a challenge: for each iteration we have to rebuild 
> the tf-idf vector for every document. This is a costly computation if we 
> represent a document by a lot of terms. Feature selection can help reduce 
> the computation.
> Due to its computational efficiency and simple interpretation, information 
> gain is one of the most popular feature selection methods. It is used to 
> measure the dependence between features and labels and calculates the 
> information gain between the i-th feature and the class labels 
> (http://www.jiliang.xyz/publication/feature_selection_for_classification.pdf).
> I confirmed this by running logistic regression on the Enron mail dataset 
> (in which each email is represented by the top 100 terms with the highest 
> information gain) and got 92% accuracy and 82% precision.
> This ticket will create two new streaming expressions. Both of them use the 
> same *parallel iterative framework* as SOLR-8492.
> {code}
> featuresSelection(collection1, q="*:*",  field="tv_text", outcome="out_i", 
> positiveLabel=1, numTerms=100)
> {code}
> featuresSelection will emit the top terms with the highest information gain 
> scores. It can be combined with the new tlogit stream.
> {code}
> tlogit(collection1, q="*:*",
>  featuresSelection(collection1, 
>   q="*:*",  
>   field="tv_text", 
>   outcome="out_i", 
>   positiveLabel=1, 
>   numTerms=100),
>  field="tv_text",
>  outcome="out_i",
>  maxIterations=100)
> {code}
> In iteration n, the text logistic regression will emit the nth model and 
> compute the error of the (n-1)th model, because the error would be wrong if 
> computed dynamically within the same iteration. 
> In each iteration tlogit will adjust the learning rate based on the error of 
> the previous iteration: it will increase the learning rate by 5% if the error 
> is going down and decrease it by 50% if the error is going up.
> This will support use cases such as building models for spam detection, 
> sentiment analysis and threat detection. 
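The adaptive learning-rate rule described above (increase by 5% when the error falls, halve it when the error rises) can be sketched as follows. This is illustrative only, not the patch's code; the function name and the error values are made up:

```python
def adjust_learning_rate(rate, prev_error, curr_error):
    """Per-iteration schedule from the description:
    +5% if the error went down, -50% if it went up, unchanged otherwise."""
    if curr_error < prev_error:
        return rate * 1.05
    if curr_error > prev_error:
        return rate * 0.5
    return rate

rate = 0.01
errors = [0.40, 0.35, 0.37, 0.30]  # hypothetical per-iteration errors
for prev, curr in zip(errors, errors[1:]):
    rate = adjust_learning_rate(rate, prev, curr)
print(rate)
```

The asymmetry (gentle increases, aggressive halving) keeps the iterative optimization from diverging after a bad step while still speeding up when progress is steady.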






[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 984 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/984/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.search.mlt.SimpleMLTQParserTest.doTest

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([428196A1A745BEEE:E5C52E05CAFEAD57]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
at 
org.apache.solr.search.mlt.SimpleMLTQParserTest.doTest(SimpleMLTQParserTest.java:82)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result/doc[1]/int[@name='id'][.='13']
xml response was: 

0116161616The slim 
red fox jumped over the lazy brown dogs.The slim red fox jumped over the lazy brown 

[jira] [Closed] (SOLR-9186) Logistic regression modeling for text

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-9186.

Resolution: Won't Fix

> Logistic regression modeling for text
> -
>
> Key: SOLR-9186
> URL: https://issues.apache.org/jira/browse/SOLR-9186
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>
> SOLR-8492 optimizes a logistic regression model for numeric fields. While 
> this is interesting, I think it would be more interesting to build logistic 
> regression models on text within an inverted index.
> This ticket will use the same *parallel iterative framework* as SOLR-8492, 
> but different data access patterns on the shards, to optimize a logistic 
> regression model on text.
> This will support use cases such as building models for spam detection, 
> sentiment analysis and threat detection.






[jira] [Commented] (SOLR-9257) Basic Authentication - Internode Requests Fail With 401

2016-06-27 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351944#comment-15351944
 ] 

Ishan Chattopadhyaya commented on SOLR-9257:


Do you think this could be related to SOLR-9188?

> Basic Authentication - Internode Requests Fail With 401
> ---
>
> Key: SOLR-9257
> URL: https://issues.apache.org/jira/browse/SOLR-9257
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.1
>Reporter: Martin Löper
>  Labels: authentification, pki, security, ssl
>
> I enabled SSL successfully and subsequently also turned on the 
> BasicAuthentication Plugin along with Rule-Based Authentication in SolrCloud 
> mode. This works well when there is no inter-node communication. As soon as I 
> create a collection with 2 shards, I get the following exception for every 
> access of the "/select" request handler.
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":401,
> "QTime":181,
> "params":{
>   "q":"*:*",
>   "indent":"on",
>   "wt":"json",
>   "_":"1467062257216"}},
>   "error":{
> "metadata":[
>   
> "error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException",
>   
> "root-error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"],
> "msg":"Error from server at 
> https://myserver.xxx.corp:8983/solr/mycollection_shard2_replica1: Expected 
> mime type application/octet-stream but got text/html. \n\n http-equiv=\"Content-Type\" 
> content=\"text/html;charset=utf-8\"/>\nError 401 Unauthorized request, 
> Response code: 401\n\nHTTP ERROR 
> 401\nProblem accessing /solr/mycollection_shard2_replica1/select. 
> Reason:\nUnauthorized request, Response code: 
> 401\n\n\n",
> "code":401}}
> There are also PKIAuthenticationPlugin exceptions before the exception above:
> Exception trying to get public key from : https://myserver.xxx.corp:8983/solr
> org.noggit.JSONParser$ParseException: JSON Parse Error: char=<,position=0 
> BEFORE='<' AFTER='html>  

[jira] [Commented] (SOLR-8297) Allow join query over 2 sharded collections: enhance functionality and exception handling

2016-06-27 Thread Shikha Somani (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351939#comment-15351939
 ] 

Shikha Somani commented on SOLR-8297:
-

Gentle reminder: please review the above changes and merge if appropriate.

> Allow join query over 2 sharded collections: enhance functionality and 
> exception handling
> -
>
> Key: SOLR-8297
> URL: https://issues.apache.org/jira/browse/SOLR-8297
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrCloud
>Affects Versions: 5.3
>Reporter: Paul Blanchaert
>
> Enhancement based on SOLR-4905. New Jira issue raised as suggested by Mikhail 
> Khludnev.
> A) exception handling:
> The exception "SolrCloud join: multiple shards not yet supported" thrown in 
> the function findLocalReplicaForFromIndex of JoinQParserPlugin is not 
> triggered correctly. In my use case, I have a join on a facet.query; when my 
> results are found in only one shard and the facet.query with the join is 
> querying the last replica of the last slice, the exception is not thrown.
> I believe it's better to verify the number of slices when we want to enforce 
> the "multiple shards not yet supported" exception (so the exception is thrown 
> when zkController.getClusterState().getSlices(fromIndex).size() > 1).
> B) functional enhancement:
> I would expect that there is no problem to perform a cross-core join over 
> sharded collections when the following conditions are met:
> 1) both collections are sharded with the same replicationFactor and numShards
> 2) router.field of both collections is set to the same "key-field" (the 
> "fromindex" collection has router.field = "from" field and the collection 
> joined to has router.field = "to" field)
> The router.field setup ensures that documents with the same "key-field" are 
> routed to the same node. 
> So the combination based on the "key-field" should always be available within 
> the same node.
> From a user perspective, I believe these assumptions seem to be a "normal" 
> use-case in the cross-core join in SolrCloud.
> Hope this helps
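The co-location argument in (B) can be illustrated with a simplified hash router (Solr's actual CompositeId router uses a different hash, but the principle is the same): when two collections share numShards and route on the same key field, a given key always lands on the same shard index in both, so the join can run node-locally.

```python
import zlib

NUM_SHARDS = 4  # assumed equal for both collections

def shard_for(key: str, num_shards: int = NUM_SHARDS) -> int:
    """Toy stand-in for Solr's document router: hash the routing key
    and map it to a shard index. Same key -> same shard index."""
    return zlib.crc32(key.encode("utf-8")) % num_shards

# Documents in the "from" and "to" collections that share a join key
# resolve to the same shard index in both collections.
for key in ["user-1", "user-2", "user-3"]:
    from_shard = shard_for(key)   # shard in the "fromindex" collection
    to_shard = shard_for(key)     # shard in the joined-to collection
    print(key, "->", from_shard, to_shard)
```

If the two collections had different numShards, the modulus would differ and the co-location guarantee would break, which is why condition (1) above matters.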






[jira] [Commented] (LUCENE-7357) TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for shape=GeoStandardPath

2016-06-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351921#comment-15351921
 ] 

Karl Wright commented on LUCENE-7357:
-

The point is definitely supposed to be within the path, so isWithin() is 
working properly. It is within the segment part of the path, so the bounds for 
the segment should yield a bounding solid that is outside of the point.

But this isn't what happens.  The segment bounds include the four corner points:

{code}
   [junit4]   2> ULHC=[X=0.4449938827091463, Y=-0.8967828076912789, 
Z=-9.724499511975725E-13]
   [junit4]   2> URHC=[X=0.4449938827091463, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> LLHC=[X=0.4449938827091461, Y=0.8967828076912789, 
Z=-9.724499511975723E-13]
   [junit4]   2> LRHC=[X=0.4449938827091461, Y=0.8967828076912789, Z=0.0]
{code}

... none of which have an X value that would include the point's X value of 
0.44586529864043345.  The planes also yield a maximum/minimum X consistent with 
the same X values above.

The interesting thing is that the segment is actually a slice all the way 
through the world.  This is because it effectively has Z bounds and Y bounds 
but no X bounds.  It's an extremely thin slice, and thus what is considered 
inside might well extend quite a distance on the other side of a line 
through the origin.  We've seen cases like this before.

So when we compute the bounds for this path segment, we have to be careful to 
allow for the MINIMUM_DISTANCE offset from the shape edge.  Therefore, I tried 
introducing .addIntersection() calls for the bounds for all four intersections 
between PathSegment edges, but this did not fix the problem, so I need to 
think about why it didn't work.
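The mismatch is visible directly in the quoted numbers: the failing point's X value exceeds every corner's X value, so the point cannot fall inside a bounding solid built from those corners even though isWithin() correctly reports it inside the path.

```python
# Values copied from the output quoted in this thread.
point_x = 0.44586529864043345  # X of the doc flagged outside the XYZBounds
corner_xs = [0.4449938827091463, 0.4449938827091463,   # ULHC, URHC
             0.4449938827091461, 0.4449938827091461]   # LLHC, LRHC

xmax = max(corner_xs)
print(point_x > xmax)  # -> True: the point lies outside the corners' X range
```

So the segment bounds would need to be widened (e.g. by the MINIMUM_RESOLUTION/MINIMUM_DISTANCE margin discussed above) by at least point_x - xmax, roughly 8.7e-4, for this point to be contained.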

> TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for 
> shape=GeoStandardPath
> -
>
> Key: LUCENE-7357
> URL: https://issues.apache.org/jira/browse/LUCENE-7357
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>
> From [https://builds.apache.org/job/Lucene-Solr-Tests-master/1228/]:
> {noformat}
> Checking out Revision 46c827e31a5534bb032d0803318d01309bf0195c 
> (refs/remotes/origin/master)
> [...]
>   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
>   [junit4]   1> doc=1544 is contained by shape but is outside the 
> returned XYZBounds
>   [junit4]   1>   unquantized=[lat=-2.848117399637174E-91, 
> lon=-1.1092122135274942([X=0.44586529864043345, Y=-0.8963498732568058, 
> Z=-2.851304027160807E-91])]
>   [junit4]   1>   quantized=[X=0.44586529870253566, 
> Y=-0.8963498734280969, Z=-2.3309121299774915E-10]
>   [junit4]   1>   shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]   1>   bounds=XYZBounds: [xmin=-1.0011188549924792 
> xmax=0.4449938894797613 ymin=-1.0011188549924792 ymax=1.0011188549924792 
> zmin=-0.9977622930221051 zmax=0.9977622930221051]
>   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=1F71744AE2101863 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pt-PT 
> -Dtests.timezone=Europe/Berlin -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>   [junit4] FAILURE 1.46s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>   [junit4]> Throwable #1: java.lang.AssertionError: invalid bounds for 
> shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([1F71744AE2101863:AF0E09DE6D5DB6FF]:0)
>   [junit4]>   at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:259)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4] IGNOR/A 0.00s J1 | TestGeo3DPoint.testRandomBig
>   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=pt-PT, 
> timezone=Europe/Berlin
>   [junit4]   2> NOTE: Linux 3.13.0-85-generic amd64/Oracle Corporation 
> 1.8.0_74 (64-bit)/cpus=4,threads=1,free=256210224,total=354418688
>   [junit4]   2> NOTE: All tests run in this 

[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-27 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351895#comment-15351895
 ] 

Gregory Chanan commented on SOLR-9076:
--

bq. Strange, I wonder why that didn't show up when I ran the tests? Maybe I 
need a different profile.

It's a nightly test.  I was able to reproduce it.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9255) Start Script Basic Authentication

2016-06-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351794#comment-15351794
 ] 

Martin Löper edited comment on SOLR-9255 at 6/27/16 9:42 PM:
-

In 
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/util/SolrCLI.java
 we find:
System.getProperty("solr.authentication.httpclient.builder");

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr.in.sh 
l.117-119 there is:
#Settings for authentication
#SOLR_AUTHENTICATION_CLIENT_CONFIGURER=
#SOLR_AUTHENTICATION_OPTS=

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr l.160:
AUTHC_CLIENT_CONFIGURER_ARG="-Dsolr.authentication.httpclient.configurer=$SOLR_AUTHENTICATION_CLIENT_CONFIGURER"

As far as I can see, the name of the system property changed from 
solr.authentication.httpclient.configurer to 
solr.authentication.httpclient.builder, but the rename was never carried over 
to Solr's start script (bin/solr).
Can you confirm that? Is that why the SOLR_AUTHENTICATION_CLIENT_CONFIGURER 
shell variable misbehaves?
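If that diagnosis is right, the fix would be a one-line change in bin/solr to use the property name SolrCLI.java actually reads. A minimal sketch (hypothetical; the configurer class value is a placeholder and the surrounding script context may differ):

```shell
# Placeholder configurer class, standing in for whatever the user sets in solr.in.sh:
SOLR_AUTHENTICATION_CLIENT_CONFIGURER="org.example.MyConfigurer"

# before (current bin/solr, l.160):
#   AUTHC_CLIENT_CONFIGURER_ARG="-Dsolr.authentication.httpclient.configurer=$SOLR_AUTHENTICATION_CLIENT_CONFIGURER"
# after, matching the property name read by SolrCLI.java:
AUTHC_CLIENT_CONFIGURER_ARG="-Dsolr.authentication.httpclient.builder=$SOLR_AUTHENTICATION_CLIENT_CONFIGURER"

echo "$AUTHC_CLIENT_CONFIGURER_ARG"
```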



was (Author: martinloeper):
In 
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/util/SolrCLI.java
 we find:
System.getProperty("solr.authentication.httpclient.builder");

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr.in.sh 
l.117-119 there is:
#Settings for authentication
#SOLR_AUTHENTICATION_CLIENT_CONFIGURER=
#SOLR_AUTHENTICATION_OPTS=

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr l.160:
AUTHC_CLIENT_CONFIGURER_ARG="-Dsolr.authentication.httpclient.configurer=$SOLR_AUTHENTICATION_CLIENT_CONFIGURER"

As far as I see, the name of the system property changed from 
solr.authentication.httpclient.configurer to 
solr.authentication.httpclient.builder, but was not correctly adjusted in 
solr.in.sh.
Can you confirm that? Is that why the SOLR_AUTHENTICATION_CLIENT_CONFIGURER 
shell variable misbehaves?


> Start Script Basic Authentication
> -
>
> Key: SOLR-9255
> URL: https://issues.apache.org/jira/browse/SOLR-9255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.1
>Reporter: Martin Löper
>
> I configured SSL and BasicAuthentication with Rule-Based-Authorization.
> I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
> Basic Authentication Credentials to the Solr Start Script anymore. For the 
> previous release I did this via the bin/solr.in.sh shellscript.
> What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
> SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
> way to pass basic auth credentials on the command-line?






[jira] [Created] (SOLR-9257) Basic Authentication - Internode Request Fail With 401

2016-06-27 Thread JIRA
Martin Löper created SOLR-9257:
--

 Summary: Basic Authentication - Internode Request Fail With 401
 Key: SOLR-9257
 URL: https://issues.apache.org/jira/browse/SOLR-9257
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Authentication
Affects Versions: 6.1
Reporter: Martin Löper


I enabled SSL successfully and subsequently also turned on the 
BasicAuthentication Plugin along with Rule-Based Authentication in SolrCloud 
mode. This works well when there is no inter-node communication. As soon as I 
create a collection with 2 shards, I get the following exception for every 
access of the "/select" request handler.

{
  "responseHeader":{
"zkConnected":true,
"status":401,
"QTime":181,
"params":{
  "q":"*:*",
  "indent":"on",
  "wt":"json",
  "_":"1467062257216"}},
  "error":{
"metadata":[
  
"error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException",
  
"root-error-class","org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException"],
"msg":"Error from server at 
https://myserver.xxx.corp:8983/solr/mycollection_shard2_replica1: Expected mime 
type application/octet-stream but got text/html. \n\n\nError 
401 Unauthorized request, Response code: 401\n\nHTTP 
ERROR 401\nProblem accessing /solr/mycollection_shard2_replica1/select. 
Reason:\nUnauthorized request, Response code: 
401\n\n\n",
"code":401}}

There are also PKIAuthenticationPlugin exceptions before the exception above:
Exception trying to get public key from : https://myserver.xxx.corp:8983/solr

org.noggit.JSONParser$ParseException: JSON Parse Error: char=<,position=0 
BEFORE='<' AFTER='html>  

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+124) - Build # 17079 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17079/
Java: 64bit/jdk-9-ea+124 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'f' for path 'params/fixed' full output: {   
"responseHeader":{ "status":0, "QTime":0},   "params":{ 
"add":"second", "a":"A val", "fixed":"changeit", "b":"B val", 
"wt":"json"},   "context":{ "webapp":"", "path":"/dump1", 
"httpMethod":"GET"}},  from server:  https://127.0.0.1:38847/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'f' for path 
'params/fixed' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "params":{
"add":"second",
"a":"A val",
"fixed":"changeit",
"b":"B val",
"wt":"json"},
  "context":{
"webapp":"",
"path":"/dump1",
"httpMethod":"GET"}},  from server:  https://127.0.0.1:38847/collection1
at 
__randomizedtesting.SeedInfo.seed([396992DD12AAE03C:B13DAD07BC568DC4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:241)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Comment Edited] (SOLR-9255) Start Script Basic Authentication

2016-06-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351794#comment-15351794
 ] 

Martin Löper edited comment on SOLR-9255 at 6/27/16 8:56 PM:
-

In 
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/util/SolrCLI.java
 we find:
System.getProperty("solr.authentication.httpclient.builder");

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr.in.sh 
l.117-119 there is:
#Settings for authentication
#SOLR_AUTHENTICATION_CLIENT_CONFIGURER=
#SOLR_AUTHENTICATION_OPTS=

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr l.160:
AUTHC_CLIENT_CONFIGURER_ARG="-Dsolr.authentication.httpclient.configurer=$SOLR_AUTHENTICATION_CLIENT_CONFIGURER"

As far as I see, the name of the system property changed from 
solr.authentication.httpclient.configurer to 
solr.authentication.httpclient.builder, but was not correctly adjusted in 
solr.in.sh.
Can you confirm that? Is that why the SOLR_AUTHENTICATION_CLIENT_CONFIGURER 
shell variable misbehaves?



was (Author: martinloeper):
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/util/SolrCLI.java
 we find:
System.getProperty("solr.authentication.httpclient.builder");

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr.in.sh 
l.117-119 there is:
# Settings for authentication
#SOLR_AUTHENTICATION_CLIENT_CONFIGURER=
#SOLR_AUTHENTICATION_OPTS=

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr l.160:
AUTHC_CLIENT_CONFIGURER_ARG="-Dsolr.authentication.httpclient.configurer=$SOLR_AUTHENTICATION_CLIENT_CONFIGURER"

As far as I see, the name of the system property changed from 
solr.authentication.httpclient.configurer to 
solr.authentication.httpclient.builder, but was not correctly adjusted in 
solr.in.sh.
Can you confirm that? Is that why the SOLR_AUTHENTICATION_CLIENT_CONFIGURER 
shell variable misbehaves?


> Start Script Basic Authentication
> -
>
> Key: SOLR-9255
> URL: https://issues.apache.org/jira/browse/SOLR-9255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.1
>Reporter: Martin Löper
>
> I configured SSL and BasicAuthentication with Rule-Based-Authorization.
> I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
> Basic Authentication Credentials to the Solr Start Script anymore. For the 
> previous release I did this via the bin/solr.in.sh shellscript.
> What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
> SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
> way to pass basic auth credentials on the command-line?






[jira] [Commented] (SOLR-9255) Start Script Basic Authentication

2016-06-27 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351794#comment-15351794
 ] 

Martin Löper commented on SOLR-9255:


https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/util/SolrCLI.java
 we find:
System.getProperty("solr.authentication.httpclient.builder");

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr.in.sh 
l.117-119 there is:
# Settings for authentication
#SOLR_AUTHENTICATION_CLIENT_CONFIGURER=
#SOLR_AUTHENTICATION_OPTS=

In https://github.com/apache/lucene-solr/blob/master/solr/bin/solr l.160:
AUTHC_CLIENT_CONFIGURER_ARG="-Dsolr.authentication.httpclient.configurer=$SOLR_AUTHENTICATION_CLIENT_CONFIGURER"

As far as I see, the name of the system property changed from 
solr.authentication.httpclient.configurer to 
solr.authentication.httpclient.builder, but was not correctly adjusted in 
solr.in.sh.
Can you confirm that? Is that why the SOLR_AUTHENTICATION_CLIENT_CONFIGURER 
shell variable misbehaves?


> Start Script Basic Authentication
> -
>
> Key: SOLR-9255
> URL: https://issues.apache.org/jira/browse/SOLR-9255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.1
>Reporter: Martin Löper
>
> I configured SSL and BasicAuthentication with Rule-Based-Authorization.
> I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
> Basic Authentication Credentials to the Solr Start Script anymore. For the 
> previous release I did this via the bin/solr.in.sh shellscript.
> What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
> SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
> way to pass basic auth credentials on the command-line?






[jira] [Commented] (SOLR-9076) Update to Hadoop 2.7.2

2016-06-27 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351795#comment-15351795
 ] 

Gregory Chanan commented on SOLR-9076:
--

Strange, I wonder why that didn't show up when I ran the tests?  Maybe I need a 
different profile.

I'll take a look.

> Update to Hadoop 2.7.2
> --
>
> Key: SOLR-9076
> URL: https://issues.apache.org/jira/browse/SOLR-9076
> Project: Solr
>  Issue Type: Improvement
>Reporter: Mark Miller
>Assignee: Mark Miller
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch, 
> SOLR-9076.patch, SOLR-9076.patch, SOLR-9076.patch
>
>







[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-27 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351782#comment-15351782
 ] 

Uwe Schindler commented on LUCENE-7355:
---

Hi,
I have to think about this! Do we really need to change Analyzer's API? To me 
it sounds a bit strange to replace the Tokenizer with a KeywordTokenizer by 
default...

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when in comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.






[JENKINS] Lucene-Artifacts-6.x - Build # 97 - Failure

2016-06-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-6.x/97/

No tests ran.

Build Log:
[...truncated 8078 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build.xml:480: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/common-build.xml:2496:
 Can't get https://issues.apache.org/jira/rest/api/2/project/LUCENE to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build/docs/changes/jiraVersionList.json

Total time: 2 minutes 57 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-27 Thread Gary Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351752#comment-15351752
 ] 

Gary Lee commented on SOLR-9248:


[~mdrob] Looks like 5.5.2 was just released, but I'm not sure when I'll have a 
chance to integrate it with our application to test this.

However, I did look at the Solr 5.5.2 source code, and based on what I see, I 
don't believe it is resolved yet. I'm still seeing the same call in 
HttpSolrClient.executeMethod to close the entity and associated response input 
stream using Utils.consumeFully, and this is where the problem occurs.
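A minimal, self-contained sketch of the failure mode described above (not Solr's actual code; the class and method names here are made up): a truncated gzip body makes GZIPInputStream throw EOFException mid-drain, and if that exception is swallowed without a finally-close, the pooled connection underneath is never released.

```java
import java.io.*;
import java.util.zip.*;

public class GzipEofDemo {
    // Drain a gzip stream the way a client consuming a response body might;
    // return true if an EOFException was thrown (and swallowed) along the way.
    static boolean consumeSwallowingEof(InputStream raw) throws IOException {
        GZIPInputStream gz = new GZIPInputStream(raw);
        try {
            byte[] buf = new byte[256];
            while (gz.read(buf) != -1) { /* drain */ }
            return false;
        } catch (EOFException e) {
            // In the buggy path this exception is silently eaten; without a
            // finally block, gz.close() is skipped and the underlying pooled
            // HTTP connection is never returned to the pool.
            return true;
        } finally {
            gz.close(); // always close, so the connection is released even on EOF
        }
    }

    static boolean truncatedGzipThrowsEof() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream out = new GZIPOutputStream(bos);
        out.write("hello".getBytes("UTF-8"));
        out.close();
        byte[] full = bos.toByteArray();
        // Drop part of the 8-byte gzip trailer to simulate a short read.
        byte[] truncated = java.util.Arrays.copyOf(full, full.length - 4);
        return consumeSwallowingEof(new ByteArrayInputStream(truncated));
    }

    public static void main(String[] args) throws IOException {
        System.out.println(truncatedGzipThrowsEof()); // true: EOF hit mid-trailer
    }
}
```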


> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with changes to how the 
> response input stream is closed in 5.5. It is at this point when the 
> GZIPInputStream throws an EOFException, and while this is silently eaten up, 
> the net effect is that the stream is never closed, leaving the connection 
> open. After a number of requests, the pool is exhausted and no further 
> requests can be served.






Re: [DISCUSS] Lucene/Solr release management

2016-06-27 Thread Steve Rowe
So I changed my mind: I think calling the script manageRelease.py was 
premature, since it only did the one thing, so I’ve renamed it to 
releasedJirasRegex.py and cleaned it up a bit.

If we decide to go the central script route, we can add manageRelease.py then.  
That script will likely just call out to other (standalone) scripts; it 
probably won’t hold any task implementations.

--
Steve
www.lucidworks.com
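The regex-generation task described above can be sketched roughly like this (a hypothetical simplification, not the actual dev-tools/scripts/releasedJirasRegex.py; it only collects issue ids, not CHANGES.txt section structure):

```python
import re

def released_jiras_regex(changes_text):
    """Collect every LUCENE-/SOLR- issue id mentioned in a CHANGES.txt blob
    and join them into one alternation regex, suitable for a match-all search
    in a regex-aware editor to spot duplicated entries across releases."""
    ids = sorted(set(re.findall(r"\b(?:LUCENE|SOLR)-\d+\b", changes_text)))
    return "|".join(ids)

changes = "* SOLR-9076: Update to Hadoop 2.7.2\n* LUCENE-7355: query parsers\n"
print(released_jiras_regex(changes))  # LUCENE-7355|SOLR-9076
```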

> On Jun 24, 2016, at 7:56 PM, Steve Rowe  wrote:
> 
> I think rather than have a bunch of scripts, one per task, we could have a 
> single one that takes commands and does stuff based on the command.
> 
> In this spirit, I’ve created dev-tools/scripts/manageRelease.py, which only 
> does one thing right now: makes regexes for all JIRAs included in a release 
> by parsing the CHANGES.txt files.  (This is to simplify the task of figuring 
> out which JIRAs in a release are duplicated in CHANGES.txt under previously 
> unreleased versions - with a regex, you can do a search/match-all in a 
> regex-aware editor and very quickly find matching JIRAs under other 
> releases.)  The script will need work to accept commands to do other things, 
> though: I didn’t put anything in place to do the command-switching thing.
> 
> If we don’t want to go this way, we can rename manageRelease.py to 
> releasedJiraRegexes.py or something. 
> 
> --
> Steve
> www.lucidworks.com
> 
>> On Jun 1, 2016, at 2:36 PM, Chris Hostetter  wrote:
>> 
>> 
>> : 1. I think if used this will become the de facto source of RM 
>> : documentation, so it will have to be able to do a full listing of all 
>> : steps, maybe also the steps that *aren’t* included too for the given 
>> : release type, so that the RM can see/verify the full context.
>> 
>> wouldn't that just be a matter of reading the "original" JSON that's 
>> committed into dev-tools (i.e., not the copy currently being worked through)
>> 
>> : 2. The JSON file shouldn’t be stored in the source tree; several steps 
>> : look for/assure clean checkouts.  Maybe a configurable location with a 
>> : default under /tmp/releases/X.Y.Z/ (already used by at least one release 
>> : script now).
>> 
>> ok, sure ... i was suggesting it could be explicitly .gitignored, but i'm 
>> not hung up on the details.
>> 
>> : 3. Some things can be safely done out of order, while others have 
>> : prerequisites.  Maybe the script could somehow make these dependencies 
>> : visible?  Skipping and out of order completion (and progress for some 
>> : manual multi-step things) should be supported as well.
>> 
>> sure ... again: these seem like minor details relative to the broader goal of 
>> "script everything, including the checklist"
>> 
>> If this is a goal folks think makes sense, then frankly scripting the 
>> checklist itself seems like it should be something we worry about way 
>> down the road.
>> 
>> Starting with "add more scripts that echo the exact commands to 
>> copy/paste based on the version# + RC# of the release" seems like where 
>> the first big wins could come from...
>> 
>> Examples:
>> 
>> updateVersionNumbersOnAllAffectedBranches.py 6.2.0 ~/lucene/my-checkout
>> 1. looks at the list of branches/tags in ~/lucene/my-checkout
>> 2. echos the exact list of "git co", "addVersion.py", 
>>   and "git commit" commands you should run based on 6.2.0 being 
>>   a minor release
>> 3. warns you if any expected backcompat indexes are missing on 
>>   any branches
>> 4. echos the "git push origin ..." command listing all affected 
>>   branches
>> 5. if you run it again, after running some of the commands,
>>   only echos the commands that are still needed
>> 
>> tagAndPublish.py lucene-solr-6.2.0-RC2 DEADBEEF
>> 1. echos the exact tag command to run
>> 2. echos the exact svm mv && svn rm commands for the dist repo
>>   - including rm'ing RC0 and RC1 if they still exist
>> 3. echos the exact ant commands to publish to maven
>> 4. echos the instructions/url to close & release on maven central
>> 5. echos the "git push origin ..." command for the tag
>> 6. if you run it again, after running some of the commands,
>>   checks that the tag matches DEADBEEF, and only echos the commands 
>>   that are still needed
>> 
>> etc...
>> 
>> 
>> 
>> 
>> 
>> -Hoss
>> http://www.lucidworks.com/
>> 
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
> 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3368 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3368/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.search.join.TestJoinUtil.testSingleValueRandomJoin

Error Message:
expected: but 
was:

Stack Trace:
java.lang.AssertionError: 
expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([73FAC3EDF60697FE:9F42D2A77EC58B21]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.search.join.TestJoinUtil.assertBitSet(TestJoinUtil.java:1046)
at 
org.apache.lucene.search.join.TestJoinUtil.executeRandomJoin(TestJoinUtil.java:1023)
at 
org.apache.lucene.search.join.TestJoinUtil.testSingleValueRandomJoin(TestJoinUtil.java:938)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 7195 lines...]
   [junit4] Suite: org.apache.lucene.search.join.TestJoinUtil
   [junit4]   2> NOTE: reproduce with: ant test  

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 277 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/277/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001\testDirectInstantiation-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001\testDirectInstantiation-001

C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001\testDirectInstantiation-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001\testDirectInstantiation-001
   
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\core\test\J1\temp\lucene.store.TestDirectory_2359A06B3314D9E6-001

at __randomizedtesting.SeedInfo.seed([2359A06B3314D9E6]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:323)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 800 lines...]
   [junit4] Suite: org.apache.lucene.store.TestDirectory
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): {}, 
docValues:{}, maxPointsInLeafNode=746, maxMBSortInHeap=5.276150363013141, 
sim=RandomSimilarity(queryNorm=true,coord=no): {}, locale=hr-HR, 
timezone=America/Rio_Branco
   [junit4]   2> NOTE: Windows 10 10.0 amd64/Oracle Corporation 1.8.0_92 
(64-bit)/cpus=3,threads=1,free=83635720,total=234881024
   [junit4]   2> NOTE: All tests run in this JVM: [TestComplexExplanations, 
TestIndexWriterOnJRECrash, TestSimilarity2, TestMmapDirectory, 
TestNoDeletionPolicy, TestMultiPhraseEnum, TestDocBoost, TestBooleanCoord, 
TestRollingUpdates, TestLucene62SegmentInfoFormat, TestStressAdvance, 
TestLazyProxSkipping, TestDocumentWriter, TestTermVectors, TestSegmentInfos, 
TestStandardAnalyzer, TestByteSlices, TestLucene50CompoundFormat, 
TestUniqueTermCount, TestSearch, TestWildcard, TestByteArrayDataInput, 
TestNRTThreads, TestPersistentSnapshotDeletionPolicy, TestMaxTermFrequency, 
TestSortedNumericSortField, TestConjunctions, TestSimpleAttributeImpl, 
TestTopFieldCollector, TestBufferedChecksum, TestGeoEncodingUtils, 
TestDocCount, TestPerFieldDocValuesFormat, TestAssertions, 
TestTermVectorsWriter, TestLSBRadixSorter, TestStressIndexing2, 
TestTragicIndexWriterDeadlock, TestSpanFirstQuery, FiniteStringsIteratorTest, 
TestSegmentMerger, TestRegExp, TestTermRangeQuery, TestQueryBuilder, 
TestNRTReaderCleanup, TestCharsRef, TestBytesRefHash, TestLockFactory, 
TestSameScoresWithThreads, TestLRUQueryCache, TestDeterminizeLexicon, 
TestMSBRadixSorter, TestPackedInts, TestSortRandom, TestSimpleFSDirectory, 
TestSizeBoundedForceMerge, TestSingleInstanceLockFactory, 
TestForTooMuchCloning, TestOmitTf, TestHugeRamFile, TestSearcherManager, 
TestFileSwitchDirectory, TestStringHelper, TestStressDeletes, 
TestPerFieldPostingsFormat, TestDirectMonotonic, TestDocIDMerger, 
TestAttributeSource, TestSloppyPhraseQuery2, 

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5939 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5939/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
There are still nodes recoverying - waited for 120 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 120 
seconds
at 
__randomizedtesting.SeedInfo.seed([24A2BA82C8EE8246:ACF685586612EFBE]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:182)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:862)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1418)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17078 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17078/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([82819A53AB40DAC5:75F2740B6DA87523]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11571 lines...]
   [junit4] Suite: 

[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 95 - Failure

2016-06-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/95/

No tests ran.

Build Log:
[...truncated 40566 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.03 sec (5.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.2.0-src.tgz...
   [smoker] 29.8 MB in 0.03 sec (1192.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.2.0.tgz...
   [smoker] 64.4 MB in 0.05 sec (1213.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.2.0.zip...
   [smoker] 75.0 MB in 0.06 sec (1204.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6032 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.2.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6032 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.2.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 224 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (18.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.2.0-src.tgz...
   [smoker] 39.1 MB in 1.00 sec (39.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.2.0.tgz...
   [smoker] 137.1 MB in 1.50 sec (91.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.2.0.zip...
   [smoker] 145.7 MB in 1.21 sec (120.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.2.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.2.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.2.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983...

[jira] [Commented] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-27 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351588#comment-15351588
 ] 

Mike Drob commented on SOLR-9248:
-

[~gary.lee] - This might be related to SOLR-8933, which was fixed in 5.5.2 and 
5.6. Can you try one of those versions and see whether the problem persists?

> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with changes to how the 
> response input stream is closed in 5.5. At that point the GZIPInputStream 
> throws an EOFException; the exception is silently swallowed, but the stream 
> is never closed, leaving the connection open. After a number of requests the 
> pool is exhausted and no further requests can be served.
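The leak described above can be reproduced in isolation: when a GZIPInputStream hits
an EOFException and the exception is swallowed without closing the stream, the
underlying (pooled) stream stays open. The sketch below is a minimal, self-contained
illustration, not SolrJ code — TrackingStream is a hypothetical stand-in for the
pooled HTTP connection stream — showing that try-with-resources still closes the
underlying stream even when the EOF is hit:

```java
import java.io.ByteArrayInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

public class GzipCloseDemo {
    /** Stand-in for the pooled HTTP connection stream; records close(). */
    static class TrackingStream extends ByteArrayInputStream {
        boolean closed = false;
        TrackingStream(byte[] buf) { super(buf); }
        @Override public void close() throws IOException {
            closed = true;
            super.close();
        }
    }

    /** A bare 10-byte gzip header with no deflate body, so read() hits EOF. */
    static byte[] truncatedGzip() {
        return new byte[] {0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, 0};
    }

    /** Returns whether the underlying stream ended up closed. */
    static boolean underlyingClosed(boolean useTryWithResources) throws IOException {
        TrackingStream raw = new TrackingStream(truncatedGzip());
        if (useTryWithResources) {
            try (InputStream gz = new GZIPInputStream(raw)) {
                gz.read();                       // throws EOFException: body is missing
            } catch (EOFException swallowed) {
                // swallowed, but try-with-resources already closed gz (and raw)
            }
        } else {
            try {
                InputStream gz = new GZIPInputStream(raw);
                gz.read();                       // throws EOFException: body is missing
                gz.close();                      // never reached -> connection leaks
            } catch (EOFException swallowed) {
                // swallowed, as in the reported bug; raw stays open
            }
        }
        return raw.closed;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("leaky pattern closed the stream:  " + underlyingClosed(false));
        System.out.println("try-with-resources closed it:     " + underlyingClosed(true));
    }
}
```

The design point is that cleanup must not depend on the read path completing
normally; try-with-resources (or a finally block) closes the decorated stream, and
GZIPInputStream's close() propagates to the wrapped stream, returning the
connection to the pool.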



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7191) Improve stability and startup performance of SolrCloud with thousands of collections

2016-06-27 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-7191:
-
Attachment: SOLR-7191.patch

> Improve stability and startup performance of SolrCloud with thousands of 
> collections
> 
>
> Key: SOLR-7191
> URL: https://issues.apache.org/jira/browse/SOLR-7191
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 5.0
>Reporter: Shawn Heisey
>Assignee: Shalin Shekhar Mangar
>  Labels: performance, scalability
> Attachments: SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> SOLR-7191.patch, SOLR-7191.patch, SOLR-7191.patch, 
> lots-of-zkstatereader-updates-branch_5x.log
>
>
> A user on the mailing list with thousands of collections (5000 on 4.10.3, 
> 4000 on 5.0) is having severe problems with getting Solr to restart.
> I tried as hard as I could to duplicate the user setup, but I ran into many 
> problems myself even before I was able to get 4000 collections created on a 
> 5.0 example cloud setup.  Restarting Solr takes a very long time, and it is 
> not very stable once it's up and running.
> This kind of setup is very much pushing the envelope on SolrCloud performance 
> and scalability.  It doesn't help that I'm running both Solr nodes on one 
> machine (I started with 'bin/solr -e cloud') and that ZK is embedded.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17077 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17077/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([C054B4FC15AFAE2A:BFCA03797CCD83A0]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:192)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:129)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithTimeDelay(ZkStateReaderTest.java:51)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11501 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateReaderTest
   [junit4]   2> Creating dataDir: 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 675 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/675/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.search.stats.TestDistribIDF

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.search.stats.TestDistribIDF: 1) Thread[id=60619, 
name=OverseerHdfsCoreFailoverThread-96144275745472520-127.0.0.1:61633_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.search.stats.TestDistribIDF: 
   1) Thread[id=60619, 
name=OverseerHdfsCoreFailoverThread-96144275745472520-127.0.0.1:61633_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([57F7E95D89C6B623]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.search.stats.TestDistribIDF

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=60619, 
name=OverseerHdfsCoreFailoverThread-96144275745472520-127.0.0.1:61633_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.interrupt0(Native Method) at 
java.lang.Thread.interrupt(Thread.java:923) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=60619, 
name=OverseerHdfsCoreFailoverThread-96144275745472520-127.0.0.1:61633_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.interrupt0(Native Method)
at java.lang.Thread.interrupt(Thread.java:923)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([57F7E95D89C6B623]:0)




Build Log:
[...truncated 12338 lines...]
   [junit4] Suite: org.apache.solr.search.stats.TestDistribIDF
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.search.stats.TestDistribIDF_57F7E95D89C6B623-001/init-core-data-001
   [junit4]   2> 3561086 INFO  
(SUITE-TestDistribIDF-seed#[57F7E95D89C6B623]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 3561088 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testSimpleQuery
   [junit4]   2> 3561088 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 3561089 INFO  (Thread-13140) [] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3561089 INFO  (Thread-13140) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 3561189 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.c.ZkTestServer start zk server on port:50961
   [junit4]   2> 3561192 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 3561193 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 3561197 INFO  (zkCallback-12196-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@4333c9d6 
name:ZooKeeperConnection Watcher:127.0.0.1:50961 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 3561197 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 3561197 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 3561197 INFO  
(TEST-TestDistribIDF.testSimpleQuery-seed#[57F7E95D89C6B623]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr/solr.xml
   [junit4]   2> 3561204 WARN  (NIOServerCxn.Factory:0.0.0.0/0.0.0.0:0) [] 
o.a.z.s.NIOServerCnxn caught 

[jira] [Commented] (SOLR-9254) TestGraphTermsQParserPlugin.testQueries() NullPointerException

2016-06-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351364#comment-15351364
 ] 

Joel Bernstein commented on SOLR-9254:
--

There is an NPE that needs to be guarded against, in case an index segment 
doesn't contain the field being searched. I will push a fix for this today.
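The guard in question is the usual "skip segments that have no terms for the
field" pattern: a per-segment term dictionary lookup returns null for a segment
that never indexed the field, and per-segment code must check for null before
dereferencing. The self-contained sketch below models that pattern with a plain
map standing in for the per-segment term dictionary; the names and data are
hypothetical, not Solr code:

```java
import java.util.List;
import java.util.Map;

public class NullTermsGuard {
    /**
     * Sums postings counts for a field across segments. A segment that never
     * indexed the field maps it to nothing, so get() returns null, mirroring
     * a per-segment terms() lookup that returns null for an absent field.
     */
    static long countPostings(List<Map<String, long[]>> segments, String field) {
        long total = 0;
        for (Map<String, long[]> segment : segments) {
            long[] postings = segment.get(field);
            if (postings == null) {
                continue; // the guard: this segment has no terms for the field
            }
            total += postings.length;
        }
        return total;
    }

    public static void main(String[] args) {
        List<Map<String, long[]>> segments = List.of(
                Map.of("title", new long[] {1, 2, 3}),
                Map.of("body",  new long[] {4}));   // no "title" field here
        System.out.println(countPostings(segments, "title")); // prints 3
        System.out.println(countPostings(segments, "body"));  // prints 1
    }
}
```

Without the null check, the second segment would trigger exactly the kind of
NullPointerException reported in the issue.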

> TestGraphTermsQParserPlugin.testQueries() NullPointerException
> --
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351343#comment-15351343
 ] 

Adrien Grand commented on LUCENE-7355:
--

This sounds good to me.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.
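
The gap described in the quoted summary can be modeled with a small, 
hypothetical sketch (plain Java, illustrative names only — not Lucene's real 
classes): each analysis component may expose a multi-term-safe counterpart 
(e.g. lowercasing) or none at all (e.g. a stemmer), and a parser normalizing a 
wildcard term applies only the safe parts.

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.UnaryOperator;

public class MultiTermSketch {
    // Toy model of the MultiTermAwareComponent idea: a component's
    // multiTerm function is null when it has no wildcard-safe variant.
    static class Component {
        final UnaryOperator<String> full;       // used for regular terms
        final UnaryOperator<String> multiTerm;  // null: unsafe for wildcards
        Component(UnaryOperator<String> full, UnaryOperator<String> multiTerm) {
            this.full = full;
            this.multiTerm = multiTerm;
        }
    }

    static String normalizeForWildcard(String term, List<Component> chain) {
        String out = term;
        for (Component c : chain) {
            if (c.multiTerm != null) {  // apply only multi-term-safe components
                out = c.multiTerm.apply(out);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Component> chain = Arrays.asList(
            new Component(String::toLowerCase, String::toLowerCase),
            new Component(s -> s.replaceAll("ing$", ""), null) // stemmer: skipped
        );
        System.out.println(normalizeForWildcard("Wild*", chain)); // prints wild*
    }
}
```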






Re: Early Access builds of JDK 8u112 b01, JDK 9 b124 are available on java.net

2016-06-27 Thread Rory O'Donnell

Thanks for the feedback Uwe!

Rgds,Rory


On 27/06/2016 12:37, Uwe Schindler wrote:


Hello,

I installed this version on Saturday: All looks fine up to now.

Uwe

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

http://www.thetaphi.de 

eMail: u...@thetaphi.de

*From:*Rory O'Donnell [mailto:rory.odonn...@oracle.com]
*Sent:* Monday, June 27, 2016 11:27 AM
*To:* Uwe Schindler 
*Cc:* rory.odonn...@oracle.com; Dalibor Topic 
; Balchandra Vaidya 
; Muneer Kolarkunnu 
; Dawid Weiss 
; dev@lucene.apache.org
*Subject:* Early Access builds of JDK 8u112 b01, JDK 9 b124 are 
available on java.net



Hi Uwe & Dawid,

Early Access b124 for JDK 9 is available on java.net; a summary of changes is 
listed here.


Early Access b123 (#5178) for JDK 9 with Project Jigsaw is available on 
java.net; a summary of changes is listed here.



Early Access b01 for JDK 8u112 is available on java.net.


Update to JEP 261 : Module System - email from Mark Reinhold [1]

- The special ALL-DEFAULT module name, which represents the default 
set of root modules for use with the -addmods option [2];
- A more thorough explanation of how the built-in class loaders have 
changed, how built-in modules are assigned to each loader, and how these 
loaders work together to load classes [3]; and
- The reason why Java EE-related modules are no longer resolved by 
default [4].
- There are various other minor corrections and clarifications, as can 
be seen in the detailed diff [5].



Rgds,Rory

[1]http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-June/008227.html
[2]http://openjdk.java.net/jeps/261#ALL-DEFAULT
[3]http://openjdk.java.net/jeps/261#Class-loaders
[4]http://openjdk.java.net/jeps/261#EE-modules
[5]http://cr.openjdk.java.net/~mr/jigsaw/jeps/updates/261-2016-06-15.html 


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin,Ireland


--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



[jira] [Commented] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-06-27 Thread Benjamin Richter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351335#comment-15351335
 ] 

Benjamin Richter commented on SOLR-9256:


This could be caused by a PostgreSQL limitation: PostgreSQL allows only one 
open result set per connection. Recent changes to JdbcDataSource.java 
optimized connection reuse and result-set closing. 
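
For context, the join="zipper" in the failing config below refers to a sorted 
merge join. A minimal sketch of the idea (plain Java, toy data — not DIH's 
actual implementation): both the parent and child streams are sorted by the 
join key, so children are consumed in lockstep without caching whole result 
sets.

```java
import java.util.ArrayList;
import java.util.List;

public class ZipperJoinSketch {
    public static void main(String[] args) {
        int[] parents = {1, 2, 4};                                   // sorted parent ids
        int[][] children = {{1, 10}, {2, 20}, {2, 21}, {3, 30}, {4, 40}}; // {parentId, value}, sorted
        int ci = 0;
        List<String> joined = new ArrayList<>();
        for (int p : parents) {
            while (ci < children.length && children[ci][0] < p) ci++; // skip unmatched children
            while (ci < children.length && children[ci][0] == p) {    // consume matching children
                joined.add(p + ":" + children[ci][1]);
                ci++;
            }
        }
        System.out.println(joined); // prints [1:10, 2:20, 2:21, 4:40]
    }
}
```

Because each stream advances forward only, the zipper needs exactly one open 
cursor per entity — which is why the recent result-set lifecycle changes can 
interact badly with PostgreSQL's single-open-result-set behavior.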

> Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple 
> joined entities
> -
>
> Key: SOLR-9256
> URL: https://issues.apache.org/jira/browse/SOLR-9256
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 6.0, 6.0.1, 6.1
> Environment: Solr 6.0, 6.0.1, 6.1 Single Instance or SolrCloud with 
> postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
>Reporter: Benjamin Richter
>
> h1. solr-data-config.xml
> {code:xml}
> 
>url="jdbc:postgresql://host:5432/database" user="username" 
> password="password" readOnly="true" autoCommit="false" />
>   
> 
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> cacheImpl="SortedMapBackedCache">
>   
>   
>   
>  cacheKey="a_id" cacheLookup="outer.id" 
> join="zipper">
>   
>   
>   
>   
> 
> {code}
> This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.
> Exception:
> org.apache.solr.handler.dataimport.DataImportHandlerException: 
> org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen. [German: "This ResultSet is closed."]
> at 
> org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
> at 
> org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
> at 
> com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
> at 
> org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
> at 
> org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
> at 
> org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
> at 
> org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
> at 
> org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
> at 
> org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
> at 
> org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> 

[jira] [Commented] (SOLR-9254) TestGraphTermsQParserPlugin.testQueries() NullPointerException

2016-06-27 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351329#comment-15351329
 ] 

Joel Bernstein commented on SOLR-9254:
--

I'll take a look.

> TestGraphTermsQParserPlugin.testQueries() NullPointerException
> --
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}






[jira] [Assigned] (SOLR-9254) TestGraphTermsQParserPlugin.testQueries() NullPointerException

2016-06-27 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-9254:


Assignee: Joel Bernstein

> TestGraphTermsQParserPlugin.testQueries() NullPointerException
> --
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Joel Bernstein
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}






[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351327#comment-15351327
 ] 

Robert Muir commented on LUCENE-7355:
-

OK, my other suggestion would be to default the implementation to 
KeywordTokenizer. This is already what happens today, and since I feel this is 
corner-case functionality, we shouldn't make it any harder to write a new 
analyzer.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.






[jira] [Comment Edited] (LUCENE-7357) TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for shape=GeoStandardPath

2016-06-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351302#comment-15351302
 ] 

Karl Wright edited comment on LUCENE-7357 at 6/27/16 4:07 PM:
--

Here's a log of the planes being considered during bound computation, and the 
minimum/maximum X.

{code}
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=-0.8967828076912789, side=-1.0]
   [junit4]   2> Point1: [X=-0.44499388270914614, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> Point2: [X=0.4449938827091463, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=0.8967828076912789, side=1.0]
   [junit4]   2> Point1: [X=-0.4449938827091463, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388270914614, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> computing X bound for plane [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=5.894405855879941E-40, 
Z=2.1877558738133775E-12]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=-5.894405855879941E-40, 
Z=-2.1877558738133775E-12]
   [junit4]   2> Point is outside of bound [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> computing X bound for plane [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=-0.0, Z=-0.0]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=0.0, Z=0.0]
   [junit4]   2> Point is outside of bound [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=2.18531083006635E-12, D=0.44499388270914614, 
side=1.0]
   [junit4]   2> Point1: [X=0.44499387510170474, Y=7.458916684009455E-9, 
Z=-1.3078228192713674E-4]
   [junit4]   2> Point2: [X=0.4449938884797613, Y=-7.458916573828387E-9, 
Z=1.307822799952568E-4]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=0.0, D=0.44499388270914614, side=1.0]
   [junit4]   2> Point1: [X=0.4449938664875679, Y=7.458916763861519E-9, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388655465266, Y=-7.458916653680447E-9, Z=0.0]
{code}

The bounding planes (on either side of the path) give a min/max X of 
+/-0.44499.  Since the point with an X value of 0.45 manages to be considered 
in-set, it must be because of the end-point half-circles and the fact that 
these circles are truly ellipses, not circles.  Indeed, the question is really 
why the point is considered within the shape in the first place.  I'll have to 
look into that next.
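
As a rough cross-check of the logged +/-0.44499 bound — a sketch assuming the 
Z=0 cross-section is a circle whose radius is the 1.0011188... value implied 
by the log's points, which sidesteps Geo3D's full ellipsoid math — the plane 
y = c meets that circle at x = ±sqrt(r*r - c*c):

```java
public class XBoundSketch {
    public static void main(String[] args) {
        // Values read off the log above; the closed form assumes a circular
        // cross-section, an approximation of Geo3D's ellipsoid model.
        double r = 1.0011188539924791; // radius implied by the log's points
        double c = 0.8967828076912789; // plane offset: the plane y = c
        double xMax = Math.sqrt(r * r - c * c);
        System.out.println(xMax); // ~0.44499388, matching the logged bound
    }
}
```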


was (Author: kwri...@metacarta.com):
Here's a log of the planes being considered during bound computation, and the 
minimum/maximum X.

{code}
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=-0.8967828076912789, side=-1.0]
   [junit4]   2> Point1: [X=-0.44499388270914614, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> Point2: [X=0.4449938827091463, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=0.8967828076912789, side=1.0]
   [junit4]   2> Point1: [X=-0.4449938827091463, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388270914614, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> computing X bound for plane [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=5.894405855879941E-40, 
Z=2.1877558738133775E-12]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=-5.894405855879941E-40, 
Z=-2.1877558738133775E-12]
   [junit4]   2> Point is outside of bound [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> computing X bound for plane [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=-0.0, Z=-0.0]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=0.0, Z=0.0]
   [junit4]   2> Point is outside of bound [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=2.18531083006635E-12, D=0.44499388270914614, 
side=1.0]
   [junit4]   2> Point1: [X=0.44499387510170474, Y=7.458916684009455E-9, 
Z=-1.3078228192713674E-4]
   [junit4]   2> Point2: [X=0.4449938884797613, Y=-7.458916573828387E-9, 
Z=1.307822799952568E-4]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=0.0, D=0.44499388270914614, side=1.0]
   [junit4]   2> Point1: [X=0.4449938664875679, Y=7.458916763861519E-9, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388655465266, Y=-7.458916653680447E-9, Z=0.0]
{code}

The bounding planes (on either side of the path) give a min/max X of +/-0.44499. 

[jira] [Comment Edited] (LUCENE-7357) TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for shape=GeoStandardPath

2016-06-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351302#comment-15351302
 ] 

Karl Wright edited comment on LUCENE-7357 at 6/27/16 4:05 PM:
--

Here's a log of the planes being considered during bound computation, and the 
minimum/maximum X.

{code}
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=-0.8967828076912789, side=-1.0]
   [junit4]   2> Point1: [X=-0.44499388270914614, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> Point2: [X=0.4449938827091463, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=0.8967828076912789, side=1.0]
   [junit4]   2> Point1: [X=-0.4449938827091463, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388270914614, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> computing X bound for plane [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=5.894405855879941E-40, 
Z=2.1877558738133775E-12]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=-5.894405855879941E-40, 
Z=-2.1877558738133775E-12]
   [junit4]   2> Point is outside of bound [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> computing X bound for plane [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=-0.0, Z=-0.0]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=0.0, Z=0.0]
   [junit4]   2> Point is outside of bound [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=2.18531083006635E-12, D=0.44499388270914614, 
side=1.0]
   [junit4]   2> Point1: [X=0.44499387510170474, Y=7.458916684009455E-9, 
Z=-1.3078228192713674E-4]
   [junit4]   2> Point2: [X=0.4449938884797613, Y=-7.458916573828387E-9, 
Z=1.307822799952568E-4]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=0.0, D=0.44499388270914614, side=1.0]
   [junit4]   2> Point1: [X=0.4449938664875679, Y=7.458916763861519E-9, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388655465266, Y=-7.458916653680447E-9, Z=0.0]
{code}

The bounding planes (on either side of the path) give a min/max X of 
+/-0.44499.  Since the point with an X value of 0.45 manages to be considered 
in-set, it must be because of the end-point half-circles and the fact that 
these circles are truly ellipses, not circles.  


was (Author: kwri...@metacarta.com):
Here's a log of the planes being considered during bound computation, and the 
minimum/maximum X.

{code}
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=-0.8967828076912789, side=-1.0]
   [junit4]   2> Point1: [X=-0.44499388270914614, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> Point2: [X=0.4449938827091463, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=0.8967828076912789, side=1.0]
   [junit4]   2> Point1: [X=-0.4449938827091463, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388270914614, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> computing X bound for plane [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=5.894405855879941E-40, 
Z=2.1877558738133775E-12]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=-5.894405855879941E-40, 
Z=-2.1877558738133775E-12]
   [junit4]   2> Point is outside of bound [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> computing X bound for plane [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=-0.0, Z=-0.0]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=0.0, Z=0.0]
   [junit4]   2> Point is outside of bound [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=2.18531083006635E-12, D=0.44499388270914614, 
side=1.0]
   [junit4]   2> Point1: [X=0.44499387510170474, Y=7.458916684009455E-9, 
Z=-1.3078228192713674E-4]
   [junit4]   2> Point2: [X=0.4449938884797613, Y=-7.458916573828387E-9, 
Z=1.307822799952568E-4]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=0.0, D=0.44499388270914614, side=1.0]
   [junit4]   2> Point1: [X=0.4449938664875679, Y=7.458916763861519E-9, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388655465266, Y=-7.458916653680447E-9, Z=0.0]
{code}



> TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for 
> shape=GeoStandardPath
> -
>
> Key: 

[jira] [Updated] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7355:
-
Attachment: LUCENE-7355.patch

Thanks for having a look. Does it look better this way? I also made Analyzer 
hold two {{storedValue}}s to make ReusableStrategy less complicated.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch, LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when in comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.






[jira] [Commented] (LUCENE-7357) TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for shape=GeoStandardPath

2016-06-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351302#comment-15351302
 ] 

Karl Wright commented on LUCENE-7357:
-

Here's a log of the planes being considered during bound computation, and the 
minimum/maximum X.

{code}
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=-0.8967828076912789, side=-1.0]
   [junit4]   2> Point1: [X=-0.44499388270914614, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> Point2: [X=0.4449938827091463, Y=-0.8967828076912789, Z=0.0]
   [junit4]   2> computing X bound for plane [A=1.2246467991473535E-16, 
B=-1.0, C=0.0, D=0.8967828076912789, side=1.0]
   [junit4]   2> Point1: [X=-0.4449938827091463, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388270914614, Y=0.8967828076912789, Z=-0.0]
   [junit4]   2> computing X bound for plane [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=5.894405855879941E-40, 
Z=2.1877558738133775E-12]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=-5.894405855879941E-40, 
Z=-2.1877558738133775E-12]
   [junit4]   2> Point is outside of bound [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> computing X bound for plane [A=0.0, B=-0.0, C=-1.0, D=0.0, 
side=-1.0]
   [junit4]   2> Point1: [X=-1.0011188539924791, Y=-0.0, Z=-0.0]
   [junit4]   2> Point2: [X=1.0011188539924791, Y=0.0, Z=0.0]
   [junit4]   2> Point is outside of bound [A=-2.18531083006635E-12, 
B=-2.676233913182802E-28, C=-1.0, D=0.0, side=1.0]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=2.18531083006635E-12, D=0.44499388270914614, 
side=1.0]
   [junit4]   2> Point1: [X=0.44499387510170474, Y=7.458916684009455E-9, 
Z=-1.3078228192713674E-4]
   [junit4]   2> Point2: [X=0.4449938884797613, Y=-7.458916573828387E-9, 
Z=1.307822799952568E-4]
   [junit4]   2> computing X bound for plane [A=-0., 
B=-1.2380065887785788E-16, C=0.0, D=0.44499388270914614, side=1.0]
   [junit4]   2> Point1: [X=0.4449938664875679, Y=7.458916763861519E-9, Z=-0.0]
   [junit4]   2> Point2: [X=0.44499388655465266, Y=-7.458916653680447E-9, Z=0.0]
{code}



> TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for 
> shape=GeoStandardPath
> -
>
> Key: LUCENE-7357
> URL: https://issues.apache.org/jira/browse/LUCENE-7357
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>
> From [https://builds.apache.org/job/Lucene-Solr-Tests-master/1228/]:
> {noformat}
> Checking out Revision 46c827e31a5534bb032d0803318d01309bf0195c 
> (refs/remotes/origin/master)
> [...]
>   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
>   [junit4]   1> doc=1544 is contained by shape but is outside the 
> returned XYZBounds
>   [junit4]   1>   unquantized=[lat=-2.848117399637174E-91, 
> lon=-1.1092122135274942([X=0.44586529864043345, Y=-0.8963498732568058, 
> Z=-2.851304027160807E-91])]
>   [junit4]   1>   quantized=[X=0.44586529870253566, 
> Y=-0.8963498734280969, Z=-2.3309121299774915E-10]
>   [junit4]   1>   shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]   1>   bounds=XYZBounds: [xmin=-1.0011188549924792 
> xmax=0.4449938894797613 ymin=-1.0011188549924792 ymax=1.0011188549924792 
> zmin=-0.9977622930221051 zmax=0.9977622930221051]
>   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=1F71744AE2101863 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pt-PT 
> -Dtests.timezone=Europe/Berlin -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>   [junit4] FAILURE 1.46s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>   [junit4]> Throwable #1: java.lang.AssertionError: invalid bounds for 
> shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([1F71744AE2101863:AF0E09DE6D5DB6FF]:0)
>   [junit4]>   at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:259)
>   [junit4]>   at 

[jira] [Created] (SOLR-9256) Solr 6.x DataImportHandler fails with postgreSQL dataSource with multiple joined entities

2016-06-27 Thread Benjamin Richter (JIRA)
Benjamin Richter created SOLR-9256:
--

 Summary: Solr 6.x DataImportHandler fails with postgreSQL 
dataSource with multiple joined entities
 Key: SOLR-9256
 URL: https://issues.apache.org/jira/browse/SOLR-9256
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: update
Affects Versions: 6.1, 6.0.1, 6.0
 Environment: Solr 6.0, 6.0.1, 6.1 Single Instance or SolrCloud with 
postgreSQL 9.4 Server on Java Version 1.8.0_91 runtime
Reporter: Benjamin Richter


h1. solr-data-config.xml
{code:xml}
(data-config.xml entity definitions not preserved in the list archive)
{code}

This works up to SOLR 5.5.2 (latest 5.x) but fails in SOLR 6.x.

Exception (the German PSQLException message means "This ResultSet is closed."):

org.apache.solr.handler.dataimport.DataImportHandlerException: 
org.postgresql.util.PSQLException: Dieses ResultSet ist geschlossen.
at 
org.apache.solr.handler.dataimport.DataImportHandlerException.wrapAndThrow(DataImportHandlerException.java:61)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator.hasnext(JdbcDataSource.java:434)
at 
org.apache.solr.handler.dataimport.JdbcDataSource$ResultSetIterator$1.hasNext(JdbcDataSource.java:350)
at 
com.google.common.collect.Iterators$PeekingImpl.hasNext(Iterators.java:1216)
at 
org.apache.solr.handler.dataimport.Zipper.supplyNextChild(Zipper.java:65)
at 
org.apache.solr.handler.dataimport.EntityProcessorBase.getNext(EntityProcessorBase.java:127)
at 
org.apache.solr.handler.dataimport.SqlEntityProcessor.nextRow(SqlEntityProcessor.java:75)
at 
org.apache.solr.handler.dataimport.EntityProcessorWrapper.nextRow(EntityProcessorWrapper.java:244)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:475)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:514)
at 
org.apache.solr.handler.dataimport.DocBuilder.buildDocument(DocBuilder.java:414)
at 
org.apache.solr.handler.dataimport.DocBuilder.doFullDump(DocBuilder.java:329)
at 
org.apache.solr.handler.dataimport.DocBuilder.execute(DocBuilder.java:232)
at 
org.apache.solr.handler.dataimport.DataImporter.doFullImport(DataImporter.java:416)
at 
org.apache.solr.handler.dataimport.DataImporter.runCmd(DataImporter.java:480)
at 
org.apache.solr.handler.dataimport.DataImportHandler.handleRequestBody(DataImportHandler.java:200)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2053)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:518)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at 

[jira] [Commented] (SOLR-9253) solrcloud goes down

2016-06-27 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351250#comment-15351250
 ] 

Erick Erickson commented on SOLR-9253:
--

Please raise questions like this on the user's list first; we try to reserve JIRA 
entries for known code issues. On the surface this appears to be a usage 
question.

When you do ping the user's list you need to be much more specific. _What_ 
fails? You say "SolrCloud goes down". Crashes? If so what's in the error log? 
Locks up? Hits an OOM error? The relevant parts of the solr log file are 
important for those questions. 

What do you mean "switch to SolrCloud"? Exactly _how_ are you doing that switch?



> solrcloud goes down
> ---
>
> Key: SOLR-9253
> URL: https://issues.apache.org/jira/browse/SOLR-9253
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.9.1
> Environment: jboss, zookeeper
>Reporter: Junfeng Mu
> Attachments: 20160627161845.png, javacore.165.txt
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> We use Solr in our project, but the data grows bigger and bigger, so we want 
> to switch to SolrCloud. However, once we switch to SolrCloud, it goes down. It 
> seems that SolrCloud is blocked and cannot handle new queries. Please see the 
> attachments and help us ASAP. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-27 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351238#comment-15351238
 ] 

Robert Muir commented on LUCENE-7355:
-

Instead of passing a boolean to createComponents, can we just have a separate 
method? This would avoid lots of if-then-else logic (which is ripe for bugs). 

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7355:
-
Attachment: LUCENE-7355.patch

Here is what the above plan would look like on 
Analyzer/StandardAnalyzer/CustomAnalyzer. Please comment if you do not like the 
idea or if you have suggestions, as it would take time to update all analyzers.

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7355.patch
>
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9255) Start Script Basic Authentication

2016-06-27 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351192#comment-15351192
 ] 

Ishan Chattopadhyaya commented on SOLR-9255:


Martin, can you please share the relevant parts of the bin/solr.in.sh that you 
said worked with 6.0.1 but not with 6.1?

> Start Script Basic Authentication
> -
>
> Key: SOLR-9255
> URL: https://issues.apache.org/jira/browse/SOLR-9255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.1
>Reporter: Martin Löper
>
> I configured SSL and BasicAuthentication with Rule-Based-Authorization.
> I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
> Basic Authentication Credentials to the Solr Start Script anymore. For the 
> previous release I did this via the bin/solr.in.sh shellscript.
> What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
> SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
> way to pass basic auth credentials on the command-line?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9255) Start Script Basic Authentication

2016-06-27 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351187#comment-15351187
 ] 

Ishan Chattopadhyaya commented on SOLR-9255:


Nothing has changed in terms of design that would imply 
SOLR_AUTHENTICATION_CLIENT_CONFIGURER and SOLR_AUTHENTICATION_OPTS will not 
work. Maybe some bug has crept in.

FWIW, Basic auth username/password can be passed via bin/solr script as per 
SOLR-8048. However, I think this is master only and not released yet.
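For context, the pre-6.1 bin/solr.in.sh settings being discussed looked roughly like this; this is an illustrative sketch (the username/password values and the exact configurer class name are examples drawn from the 5.x/6.0-era documentation, so verify them against your Solr version):

```shell
# Illustrative bin/solr.in.sh snippet (example values only; check the
# reference guide for your exact Solr version).
SOLR_AUTHENTICATION_CLIENT_CONFIGURER="org.apache.solr.client.solrj.impl.PreemptiveBasicAuthConfigurer"
SOLR_AUTHENTICATION_OPTS="-Dbasicauth=solr:SolrRocks"
echo "$SOLR_AUTHENTICATION_OPTS"
```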



> Start Script Basic Authentication
> -
>
> Key: SOLR-9255
> URL: https://issues.apache.org/jira/browse/SOLR-9255
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.1
>Reporter: Martin Löper
>
> I configured SSL and BasicAuthentication with Rule-Based-Authorization.
> I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
> Basic Authentication Credentials to the Solr Start Script anymore. For the 
> previous release I did this via the bin/solr.in.sh shellscript.
> What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
> SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
> way to pass basic auth credentials on the command-line?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7355) Leverage MultiTermAwareComponent in query parsers

2016-06-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7355?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351156#comment-15351156
 ] 

Adrien Grand commented on LUCENE-7355:
--

I propose the following plan:
 - add {{TokenStream tokenStreamMultiTerm(String fieldName, String text)}} to 
{{Analyzer}}.
 - change {{Analyzer.createComponents}} to take an additional boolean 
{{multiTerm}} parameter to know which parts of the analysis chain it should use 
when analyzing multi-term queries. For instance, the standard analyzer would 
apply a keyword tokenizer rather than a standard tokenizer, and only apply the 
standard and lowercase filters (no stop words). CustomAnalyzer would only apply 
the factories that implement {{MultiTermAwareComponent}} and pass them through 
{{MultiTermAwareComponent.getMultiTermComponent()}}.
 - change query parsers to call {{tokenStreamMultiTerm}} rather than 
{{tokenStream}} when analyzing text for wildcard, regexp or fuzzy queries.
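The shape of this plan can be sketched with a small, self-contained example. Note this is an illustration only: {{TokenStream}} and {{Analyzer}} here are stand-in stub classes, not the real Lucene ones, and {{ToyStandardAnalyzer}} is a hypothetical analyzer invented for the sketch.

```java
import java.util.Locale;

// Illustrative stand-ins for Analyzer/TokenStream (not the real Lucene
// classes). Shows the proposed shape: a multiTerm flag on
// createComponents plus a tokenStreamMultiTerm entry point that query
// parsers would call when analyzing wildcard/regexp/fuzzy text.
public class MultiTermSketch {

    // Stand-in for a TokenStream: just carries the analyzed text.
    static class TokenStream {
        final String text;
        TokenStream(String text) { this.text = text; }
    }

    abstract static class Analyzer {
        // Proposed: multiTerm tells the analyzer which parts of the
        // chain to apply when analyzing multi-term queries.
        protected abstract TokenStream createComponents(
                String fieldName, String text, boolean multiTerm);

        final TokenStream tokenStream(String fieldName, String text) {
            return createComponents(fieldName, text, false);
        }

        // Proposed new entry point for query parsers.
        final TokenStream tokenStreamMultiTerm(String fieldName, String text) {
            return createComponents(fieldName, text, true);
        }
    }

    // Toy analogue of the standard analyzer: the full chain lowercases
    // and removes a stop word; the multi-term chain only lowercases.
    static class ToyStandardAnalyzer extends Analyzer {
        @Override
        protected TokenStream createComponents(
                String fieldName, String text, boolean multiTerm) {
            String s = text.toLowerCase(Locale.ROOT);
            if (!multiTerm) {
                s = s.replaceAll("\\bthe\\b\\s*", "").trim(); // stop filter
            }
            return new TokenStream(s);
        }
    }

    public static void main(String[] args) {
        Analyzer a = new ToyStandardAnalyzer();
        System.out.println(a.tokenStream("f", "The Wildcard*").text);
        System.out.println(a.tokenStreamMultiTerm("f", "The Wildcard*").text);
    }
}
```

A query parser building a wildcard query would call tokenStreamMultiTerm, so the stop filter never eats part of the pattern while lowercasing still applies.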

> Leverage MultiTermAwareComponent in query parsers
> -
>
> Key: LUCENE-7355
> URL: https://issues.apache.org/jira/browse/LUCENE-7355
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> MultiTermAwareComponent is designed to make it possible to do the right thing 
> in query parsers when it comes to analysis of multi-term queries. However, 
> since query parsers just take an analyzer and since analyzers do not 
> propagate the information about what to do for multi-term analysis, query 
> parsers cannot do the right thing out of the box.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 103 - Still Failing

2016-06-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/103/

All tests passed

Build Log:
[...truncated 11372 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/temp/junit4-J1-20160627_125320_240.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/heapdumps/java_pid10625.hprof
 ...
   [junit4] Heap dump file created [613705179 bytes in 4.597 secs]
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/temp/junit4-J1-20160627_125320_240.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] <<< JVM J1: EOF 

[...truncated 1381 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/x1/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8/jre/bin/java 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=EEB6DF1F980385E2 -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.luceneMatchVersion=6.2.0 -Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/lucene/tools/junit4/logging.properties
 -Dtests.nightly=true -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/temp
 
-Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/lucene
 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/lucene/tools/junit4/solr-tests.policy
 -Dtests.LUCENE_VERSION=6.2.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=ISO-8859-1 -classpath 

[jira] [Created] (SOLR-9255) Start Script Basic Authentication

2016-06-27 Thread Martin Löper (JIRA)
Martin Löper created SOLR-9255:
--

 Summary: Start Script Basic Authentication
 Key: SOLR-9255
 URL: https://issues.apache.org/jira/browse/SOLR-9255
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Authentication
Affects Versions: 6.1
Reporter: Martin Löper


I configured SSL and BasicAuthentication with Rule-Based-Authorization.
I noticed that since the latest changes from 6.0.1 to 6.1.0 I cannot pass the 
Basic Authentication Credentials to the Solr Start Script anymore. For the 
previous release I did this via the bin/solr.in.sh shellscript.

What has happened with the SOLR_AUTHENTICATION_CLIENT_CONFIGURER and 
SOLR_AUTHENTICATION_OPTS parameters? Are they still in use or is there a new 
way to pass basic auth credentials on the command-line?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7351) BKDWriter should compress doc ids when all values in a block are the same

2016-06-27 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7351:
-
Attachment: LUCENE-7351.patch

I have been experimenting with the attached patch, which compresses doc ids 
based on the number of required bytes to store them (it only specializes 8, 16, 
24 and 32 bits per doc id) and also adds delta-compression for blocks whose 
values are all the same. The IndexAndSearchOpenStreetMaps reported a slow down 
of 1.7% for the box benchmark (72.3 QPS -> 71.1 QPS) but storage requirements 
decreased by 9.1% (635MB -> 577MB). The storage requirements improve even more 
with types that require fewer bytes (LatLonPoint requires 8 bytes per value). 
For instance indexing 10M random half floats with the patch requires 28MB vs 
43MB on master (-35%).
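The byte-width selection idea can be pictured with a stand-alone sketch. This is not the actual BKDWriter code (the helper names {{bytesPerId}} and {{encodeBlock}} are invented for illustration): compute the number of whole bytes needed for the largest doc id in a block, then write every id in that many bytes.

```java
import java.io.ByteArrayOutputStream;

// Stand-alone sketch (hypothetical helpers, not the actual BKDWriter
// code): store each doc id in a block using the minimal whole-byte
// width (1, 2, 3 or 4 bytes) needed for the largest id in the block.
public class DocIdWidthSketch {

    // Bytes needed to represent maxDocId: 1, 2, 3 or 4.
    static int bytesPerId(int maxDocId) {
        if (maxDocId < (1 << 8)) return 1;
        if (maxDocId < (1 << 16)) return 2;
        if (maxDocId < (1 << 24)) return 3;
        return 4;
    }

    // Encode a block: one header byte recording the width, then each
    // id written big-endian in `width` bytes.
    static byte[] encodeBlock(int[] docIds) {
        int max = 0;
        for (int id : docIds) max = Math.max(max, id);
        int width = bytesPerId(max);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(width);
        for (int id : docIds) {
            for (int shift = (width - 1) * 8; shift >= 0; shift -= 8) {
                out.write((id >>> shift) & 0xFF);
            }
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        int[] block = {5, 130, 42};      // all ids fit in one byte
        byte[] encoded = encodeBlock(block);
        // 1 header byte + 3 ids * 1 byte, instead of 3 * 4 bytes.
        System.out.println(encoded.length);
    }
}
```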

> BKDWriter should compress doc ids when all values in a block are the same
> -
>
> Key: LUCENE-7351
> URL: https://issues.apache.org/jira/browse/LUCENE-7351
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7351.patch
>
>
> BKDWriter writes doc ids using 4 bytes per document. I think it should 
> compress similarly to postings when all docs in a block have the same packed 
> value. This can happen either when a field has a default value which is 
> common across documents or when quantization makes the number of unique 
> values so small that a large index will necessarily have blocks that all 
> contain the same value (eg. there are only 63490 unique half-float values).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7356) SearchGroup tweaks

2016-06-27 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351059#comment-15351059
 ] 

Adrien Grand commented on LUCENE-7356:
--

+1

> SearchGroup tweaks
> --
>
> Key: LUCENE-7356
> URL: https://issues.apache.org/jira/browse/LUCENE-7356
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7356.patch
>
>
> * initialCapacity
> * size()==0 vs. isEmpty()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7357) TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for shape=GeoStandardPath

2016-06-27 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15351034#comment-15351034
 ] 

Karl Wright commented on LUCENE-7357:
-

This is an extremely short path, solely in latitude, of length 2.18e-12:

{code}
   [lat=2.18531083006635E-12, lon=-3.141592653589793([X=-1.0011188539924791, 
Y=-1.226017000107956E-16, Z=2.187755873813378E-12])], 
   [lat=0.0, lon=-3.141592653589793([X=-1.0011188539924791, 
Y=-1.226017000107956E-16, Z=0.0])]]}}
{code}

The bound that is violated is in X:

{code}
   xmin=-1.0011188549924792 xmax=0.4449938894797613
{code}

The point that is outside this bound but inside the shape is:

{code}
   [junit4]   1>   unquantized=[lat=-2.848117399637174E-91, 
lon=-1.1092122135274942([X=0.44586529864043345, Y=-0.8963498732568058, 
Z=-2.851304027160807E-91])]
   [junit4]   1>   quantized=[X=0.44586529870253566, Y=-0.8963498734280969, 
Z=-2.3309121299774915E-10]
{code}

It's not clear why the x-bound computation is off here; I will have to analyze 
how that's being done and look for numerical instability. It's possible that 
the issue occurs because of the approximations that must be made for the 
endpoint circles of paths (which are really ellipses).
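As a side note on the kind of instability being hunted: algebraically equivalent formulas can disagree badly near the limits of double precision. A tiny self-contained illustration, unrelated to the actual Geo3D code:

```java
// Tiny illustration (not the Geo3D code) of catastrophic cancellation:
// for small x, computing 1 - cos(x) directly loses essentially all
// significant digits, while the algebraically equal 2*sin(x/2)^2
// retains them.
public class Cancellation {
    public static void main(String[] args) {
        double x = 1e-9;
        double direct = 1.0 - Math.cos(x);            // cancellation destroys the result
        double stable = 2.0 * Math.sin(x / 2.0) * Math.sin(x / 2.0); // approx 5e-19
        System.out.println(direct);
        System.out.println(stable);
    }
}
```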

> TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for 
> shape=GeoStandardPath
> -
>
> Key: LUCENE-7357
> URL: https://issues.apache.org/jira/browse/LUCENE-7357
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>
> From [https://builds.apache.org/job/Lucene-Solr-Tests-master/1228/]:
> {noformat}
> Checking out Revision 46c827e31a5534bb032d0803318d01309bf0195c 
> (refs/remotes/origin/master)
> [...]
>   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
>   [junit4]   1> doc=1544 is contained by shape but is outside the 
> returned XYZBounds
>   [junit4]   1>   unquantized=[lat=-2.848117399637174E-91, 
> lon=-1.1092122135274942([X=0.44586529864043345, Y=-0.8963498732568058, 
> Z=-2.851304027160807E-91])]
>   [junit4]   1>   quantized=[X=0.44586529870253566, 
> Y=-0.8963498734280969, Z=-2.3309121299774915E-10]
>   [junit4]   1>   shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]   1>   bounds=XYZBounds: [xmin=-1.0011188549924792 
> xmax=0.4449938894797613 ymin=-1.0011188549924792 ymax=1.0011188549924792 
> zmin=-0.9977622930221051 zmax=0.9977622930221051]
>   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=1F71744AE2101863 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pt-PT 
> -Dtests.timezone=Europe/Berlin -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>   [junit4] FAILURE 1.46s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>   [junit4]> Throwable #1: java.lang.AssertionError: invalid bounds for 
> shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([1F71744AE2101863:AF0E09DE6D5DB6FF]:0)
>   [junit4]>   at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:259)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4] IGNOR/A 0.00s J1 | TestGeo3DPoint.testRandomBig
>   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=pt-PT, 
> timezone=Europe/Berlin
>   [junit4]   2> NOTE: Linux 3.13.0-85-generic amd64/Oracle Corporation 
> 1.8.0_74 (64-bit)/cpus=4,threads=1,free=256210224,total=354418688
>   [junit4]   2> NOTE: All tests run in this JVM: [TestGeo3DPoint]
>   [junit4] Completed [10/11 (1!)] on J1 in 37.22s, 14 tests, 1 failure, 1 
> skipped <<< FAILURES!
> {noformat}
> Reproduces for me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9088) solr.schema.TestManagedSchemaAPI.test failures ([doc=2] unknown field 'myNewField1')

2016-06-27 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-9088:

Attachment: SOLR-9088.patch

Sample log excerpt from a failure

{code}
[junit4]   2> 1995579 INFO  
(zkCallback-22745-thread-2-processing-n:127.0.0.1:39653_solr) 
[n:127.0.0.1:39653_solr] o.a.s.s.ZkIndexSchemaReader A schema change: 
WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/configs/conf1/managed-schema, has occurred - updating schema from 
ZooKeeper ...
   [junit4]   2> 1995580 INFO  
(zkCallback-22746-thread-3-processing-n:127.0.0.1:38103_solr) 
[n:127.0.0.1:38103_solr] o.a.s.s.ZkIndexSchemaReader A schema change: 
WatchedEvent state:SyncConnected type:NodeDataChanged 
path:/configs/conf1/managed-schema, has occurred - updating schema from 
ZooKeeper ...
   [junit4]   2> 1995580 INFO  (qtp694202271-87996) [n:127.0.0.1:38103_solr 
c:testschemaapi s:shard1 r:core_node1 x:testschemaapi_shard1_replica1] 
o.a.s.s.ManagedIndexSchema Waiting up to 599 secs for 1 replicas to apply 
schema update version 1 for collection testschemaapi
   [junit4]   2> 1995580 INFO  
(zkCallback-22745-thread-2-processing-n:127.0.0.1:39653_solr) 
[n:127.0.0.1:39653_solr] o.a.s.s.ZkIndexSchemaReader Retrieved schema 
version 2 from ZooKeeper
   [junit4]   2> 1995582 INFO  (qtp178914546-87997) [n:127.0.0.1:39653_solr 
c:testschemaapi s:shard1 r:core_node2 x:testschemaapi_shard1_replica2] 
o.a.s.c.S.Request [testschemaapi_shard1_replica2]  webapp=/solr 
path=/schema/zkversion params={refreshIfBelowVersion=1=2=javabin} 
status=0 QTime=0
   [junit4]   2> 1995583 INFO  (qtp694202271-87996) [n:127.0.0.1:38103_solr 
c:testschemaapi s:shard1 r:core_node1 x:testschemaapi_shard1_replica1] 
o.a.s.s.ManagedIndexSchema Took 2.0ms for 1 replicas to apply schema update 
version 1 for collection testschemaapi
   [junit4]   2> 1995583 INFO  (qtp694202271-87996) [n:127.0.0.1:38103_solr 
c:testschemaapi s:shard1 r:core_node1 x:testschemaapi_shard1_replica1] 
o.a.s.c.S.Request [testschemaapi_shard1_replica1]  webapp=/solr path=/schema 
params={version=2=javabin} status=0 QTime=16
   [junit4]   2> 1995583 INFO  
(zkCallback-22745-thread-2-processing-n:127.0.0.1:39653_solr) 
[n:127.0.0.1:39653_solr] o.a.s.s.IndexSchema 
[testschemaapi_shard1_replica2] Schema name=minimal
   [junit4]   2> 1995622 INFO  
(TEST-TestManagedSchemaAPI.test-seed#[79C11866C0E16F74]) [] 
o.a.s.s.TestManagedSchemaAPI added new field=myNewField1
   [junit4]   2> 1995622 INFO  
(zkCallback-22746-thread-3-processing-n:127.0.0.1:38103_solr) 
[n:127.0.0.1:38103_solr] o.a.s.s.ZkIndexSchemaReader Retrieved schema 
version 2 from ZooKeeper
   [junit4]   2> 1995622 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.ZkController Running listeners for /configs/conf1
   [junit4]   2> 1995622 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.SolrCore config update listener called for core 
testschemaapi_shard1_replica1
   [junit4]   2> 1995623 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.SolrConfig current version of requestparams : -1
   [junit4]   2> 1995624 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.SolrCore /configs/conf1/managed-schema is stale will need an update 
from 1 to 2
   [junit4]   2> 1995624 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.SolrCore core reload testschemaapi_shard1_replica1
   [junit4]   2> 1995624 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.ZkController Check for collection zkNode:testschemaapi
   [junit4]   2> 1995624 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.ZkController Collection zkNode exists
   [junit4]   2> 1995624 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.c.ZkStateReader Load collection config from: 
[/collections/testschemaapi]
   [junit4]   2> 1995624 INFO  (Thread-5663) [n:127.0.0.1:39653_solr] 
o.a.s.c.ZkController Running listeners for /configs/conf1
   [junit4]   2> 1995624 INFO  (Thread-5663) [n:127.0.0.1:39653_solr] 
o.a.s.c.SolrCore config update listener called for core 
testschemaapi_shard1_replica2
   [junit4]   2> 1995625 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.c.ZkStateReader path=[/collections/testschemaapi] [configName]=[conf1] 
specified config exists in ZooKeeper
   [junit4]   2> 1995625 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.SolrResourceLoader new SolrResourceLoader for directory: 
'/home/jenkins/workspace/Lucene-Solr-5.5-Linux/solr/build/solr-core/test/J2/temp/solr.schema.TestManagedSchemaAPI_79C11866C0E16F74-001/tempDir-001/node1/testschemaapi_shard1_replica1'
   [junit4]   2> 1995626 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.SolrResourceLoader JNDI not configured for solr (NoInitialContextEx)
   [junit4]   2> 1995626 INFO  (Thread-5662) [n:127.0.0.1:38103_solr] 
o.a.s.c.SolrResourceLoader solr home defaulted to 'solr/' (could not find 
{code}

[jira] [Assigned] (LUCENE-7357) TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for shape=GeoStandardPath

2016-06-27 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7357?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright reassigned LUCENE-7357:
---

Assignee: Karl Wright

> TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for 
> shape=GeoStandardPath
> -
>
> Key: LUCENE-7357
> URL: https://issues.apache.org/jira/browse/LUCENE-7357
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>
> From [https://builds.apache.org/job/Lucene-Solr-Tests-master/1228/]:
> {noformat}
> Checking out Revision 46c827e31a5534bb032d0803318d01309bf0195c 
> (refs/remotes/origin/master)
> [...]
>   [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
>   [junit4]   1> doc=1544 is contained by shape but is outside the 
> returned XYZBounds
>   [junit4]   1>   unquantized=[lat=-2.848117399637174E-91, 
> lon=-1.1092122135274942([X=0.44586529864043345, Y=-0.8963498732568058, 
> Z=-2.851304027160807E-91])]
>   [junit4]   1>   quantized=[X=0.44586529870253566, 
> Y=-0.8963498734280969, Z=-2.3309121299774915E-10]
>   [junit4]   1>   shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]   1>   bounds=XYZBounds: [xmin=-1.0011188549924792 
> xmax=0.4449938894797613 ymin=-1.0011188549924792 ymax=1.0011188549924792 
> zmin=-0.9977622930221051 zmax=0.9977622930221051]
>   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=1F71744AE2101863 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pt-PT 
> -Dtests.timezone=Europe/Berlin -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>   [junit4] FAILURE 1.46s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>   [junit4]> Throwable #1: java.lang.AssertionError: invalid bounds for 
> shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
> width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=2.187755873813378E-12])], [lat=0.0, 
> lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
> Z=0.0])]]}}
>   [junit4]>   at 
> __randomizedtesting.SeedInfo.seed([1F71744AE2101863:AF0E09DE6D5DB6FF]:0)
>   [junit4]>   at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:259)
>   [junit4]>   at java.lang.Thread.run(Thread.java:745)
>   [junit4] IGNOR/A 0.00s J1 | TestGeo3DPoint.testRandomBig
>   [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
> sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=pt-PT, 
> timezone=Europe/Berlin
>   [junit4]   2> NOTE: Linux 3.13.0-85-generic amd64/Oracle Corporation 
> 1.8.0_74 (64-bit)/cpus=4,threads=1,free=256210224,total=354418688
>   [junit4]   2> NOTE: All tests run in this JVM: [TestGeo3DPoint]
>   [junit4] Completed [10/11 (1!)] on J1 in 37.22s, 14 tests, 1 failure, 1 
> skipped <<< FAILURES!
> {noformat}
> Reproduces for me.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7357) TestGeo3DPoint.testGeo3DRelations() failure: invalid bounds for shape=GeoStandardPath

2016-06-27 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7357:
--

 Summary: TestGeo3DPoint.testGeo3DRelations() failure: invalid 
bounds for shape=GeoStandardPath
 Key: LUCENE-7357
 URL: https://issues.apache.org/jira/browse/LUCENE-7357
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/spatial3d
Reporter: Steve Rowe


From [https://builds.apache.org/job/Lucene-Solr-Tests-master/1228/]:

{noformat}
Checking out Revision 46c827e31a5534bb032d0803318d01309bf0195c 
(refs/remotes/origin/master)
[...]
  [junit4] Suite: org.apache.lucene.spatial3d.TestGeo3DPoint
  [junit4]   1> doc=1544 is contained by shape but is outside the returned 
XYZBounds
  [junit4]   1>   unquantized=[lat=-2.848117399637174E-91, 
lon=-1.1092122135274942([X=0.44586529864043345, Y=-0.8963498732568058, 
Z=-2.851304027160807E-91])]
  [junit4]   1>   quantized=[X=0.44586529870253566, Y=-0.8963498734280969, 
Z=-2.3309121299774915E-10]
  [junit4]   1>   shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
Z=2.187755873813378E-12])], [lat=0.0, 
lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
Z=0.0])]]}}
  [junit4]   1>   bounds=XYZBounds: [xmin=-1.0011188549924792 
xmax=0.4449938894797613 ymin=-1.0011188549924792 ymax=1.0011188549924792 
zmin=-0.9977622930221051 zmax=0.9977622930221051]
  [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
-Dtests.method=testGeo3DRelations -Dtests.seed=1F71744AE2101863 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=pt-PT 
-Dtests.timezone=Europe/Berlin -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
  [junit4] FAILURE 1.46s J1 | TestGeo3DPoint.testGeo3DRelations <<<
  [junit4]> Throwable #1: java.lang.AssertionError: invalid bounds for 
shape=GeoStandardPath: {planetmodel=PlanetModel.WGS84, 
width=1.117010721276371(64.0), points={[[lat=2.18531083006635E-12, 
lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
Z=2.187755873813378E-12])], [lat=0.0, 
lon=-3.141592653589793([X=-1.0011188539924791, Y=-1.226017000107956E-16, 
Z=0.0])]]}}
  [junit4]> at 
__randomizedtesting.SeedInfo.seed([1F71744AE2101863:AF0E09DE6D5DB6FF]:0)
  [junit4]> at 
org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:259)
  [junit4]> at java.lang.Thread.run(Thread.java:745)
  [junit4] IGNOR/A 0.00s J1 | TestGeo3DPoint.testRandomBig
  [junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
  [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62), 
sim=RandomSimilarity(queryNorm=false,coord=yes): {}, locale=pt-PT, 
timezone=Europe/Berlin
  [junit4]   2> NOTE: Linux 3.13.0-85-generic amd64/Oracle Corporation 1.8.0_74 
(64-bit)/cpus=4,threads=1,free=256210224,total=354418688
  [junit4]   2> NOTE: All tests run in this JVM: [TestGeo3DPoint]
  [junit4] Completed [10/11 (1!)] on J1 in 37.22s, 14 tests, 1 failure, 1 
skipped <<< FAILURES!
{noformat}

Reproduces for me.
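Plugging the quantized point and the XYZBounds reported in the log above into a plain axis-aligned containment check shows which axis fails: the quantized X coordinate lands just above xmax. A minimal sketch using the logged numbers (the class and method names are mine, not Lucene's):

```java
public class XyzBoundsCheck {
    // Axis-aligned bounds containment, mirroring what the XYZBounds assertion verifies.
    static boolean contains(double x, double y, double z,
                            double xmin, double xmax,
                            double ymin, double ymax,
                            double zmin, double zmax) {
        return x >= xmin && x <= xmax
            && y >= ymin && y <= ymax
            && z >= zmin && z <= zmax;
    }

    public static void main(String[] args) {
        // Quantized point and bounds copied verbatim from the failure log.
        double x = 0.44586529870253566, y = -0.8963498734280969, z = -2.3309121299774915E-10;
        double xmin = -1.0011188549924792, xmax = 0.4449938894797613;
        double ymin = -1.0011188549924792, ymax = 1.0011188549924792;
        double zmin = -0.9977622930221051, zmax = 0.9977622930221051;
        System.out.println("contained=" + contains(x, y, z, xmin, xmax, ymin, ymax, zmin, zmax));
        System.out.println("x - xmax = " + (x - xmax)); // positive: X exceeds xmax
    }
}
```

Y and Z both fall inside their ranges; only X is out, so the bug is presumably in how the path shape computes its X extent.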






[jira] [Commented] (SOLR-9248) HttpSolrClient not compatible with compression option

2016-06-27 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350981#comment-15350981
 ] 

Mark Miller commented on SOLR-9248:
---

This is an interesting issue [~mdrob]

> HttpSolrClient not compatible with compression option
> -
>
> Key: SOLR-9248
> URL: https://issues.apache.org/jira/browse/SOLR-9248
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Affects Versions: 5.5, 5.5.1
>Reporter: Gary Lee
>
> Since Solr 5.5, using the compression option 
> (solrClient.setAllowCompression(true)) causes the HTTP client to quickly run 
> out of connections in the connection pool. After debugging through this, we 
> found that the GZIPInputStream is incompatible with changes to how the 
> response input stream is closed in 5.5. It is at this point when the 
> GZIPInputStream throws an EOFException, and while this is silently eaten up, 
> the net effect is that the stream is never closed, leaving the connection 
> open. After a number of requests, the pool is exhausted and no further 
> requests can be served.
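The leak pattern described above is easy to reproduce in isolation. The sketch below uses my own stand-in classes, not SolrJ's actual code: it truncates a gzip payload so that GZIPInputStream throws EOFException mid-read. If the exception is swallowed without a try-with-resources around the underlying stream, that stream, standing in for the pooled connection, is never closed.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipCloseDemo {
    // Stand-in for the pooled connection's response stream; records close().
    static class TrackedStream extends ByteArrayInputStream {
        boolean closed = false;
        TrackedStream(byte[] buf) { super(buf); }
        @Override public void close() throws IOException { closed = true; super.close(); }
    }

    static byte[] truncatedGzip() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write("hello".getBytes("UTF-8"));
            }
            byte[] full = bos.toByteArray();
            // Drop the 8-byte CRC/ISIZE trailer so reading ends in EOFException.
            return Arrays.copyOf(full, full.length - 8);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    /** Reads the truncated payload; returns true if the underlying stream got closed. */
    static boolean read(boolean useTryWithResources) {
        TrackedStream underlying = new TrackedStream(truncatedGzip());
        if (useTryWithResources) {
            try (InputStream u = underlying;
                 GZIPInputStream gz = new GZIPInputStream(u)) {
                while (gz.read(new byte[64]) != -1) { /* consume */ }
            } catch (IOException e) {
                // EOFException is still thrown, but 'u' is closed on scope exit
            }
        } else {
            try {
                GZIPInputStream gz = new GZIPInputStream(underlying);
                while (gz.read(new byte[64]) != -1) { /* consume */ }
                gz.close(); // never reached: read() throws first
            } catch (IOException e) {
                // silently eaten, as in the report: the connection stays open
            }
        }
        return underlying.closed;
    }
}
```

Whether the real fix belongs in HttpSolrClient's stream-closing logic or in how the GZIP wrapper is consumed is for the patch to decide; the sketch only demonstrates the leak mechanics.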






[jira] [Commented] (SOLR-9254) TestGraphTermsQParserPlugin.testQueries() NullPointerException

2016-06-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350966#comment-15350966
 ] 

Steve Rowe commented on SOLR-9254:
--

The above seed reproduces for me on master too.

> TestGraphTermsQParserPlugin.testQueries() NullPointerException
> --
>
> Key: SOLR-9254
> URL: https://issues.apache.org/jira/browse/SOLR-9254
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>
> My Jenkins found a reproducing seed on branch_6x:
> {noformat}
> Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
> (refs/remotes/origin/branch_6x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
> -Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
>[junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
> query
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
>[junit4]>  at 
> org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
>[junit4]>  at 
> org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
>[junit4]>  at 
> org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
>[junit4]>  at 
> org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
>[junit4]>  at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
>[junit4]>  at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
>[junit4]>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
>[junit4]>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:310)
>[junit4]>  at 
> org.apache.solr.util.TestHarness.query(TestHarness.java:292)
>[junit4]>  at 
> org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
>[junit4]>  ... 41 more
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
> {test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
> test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
> test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
> group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
> maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
> sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
> {noformat}






[jira] [Commented] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-06-27 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350963#comment-15350963
 ] 

Steve Rowe commented on SOLR-9027:
--

Reproducing NPE on {{TestGraphTermsQParserPlugin.testQueries()}}: SOLR-9254 

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.1
>
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.
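The docFreq cutoff described above can be illustrated with a toy pruning step (plain Java, not the actual GraphTermsQuery implementation; all names here are mine): terms whose document frequency exceeds the cutoff are dropped before the traversal query is built.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class DocFreqCutoff {
    /**
     * Keep only terms at or below the docFreq cutoff. High-frequency "hub"
     * terms would match huge posting lists, so excluding them keeps the
     * traversal cheap, at the cost of not following edges through those hubs.
     */
    static List<String> prune(Map<String, Integer> docFreq, int cutoff) {
        List<String> kept = new ArrayList<>();
        for (Map.Entry<String, Integer> e : docFreq.entrySet()) {
            if (e.getValue() <= cutoff) {
                kept.add(e.getKey());
            }
        }
        Collections.sort(kept); // deterministic order for the rewritten query
        return kept;
    }
}
```
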






[jira] [Resolved] (LUCENE-7343) Cleanup GeoPoint Query implementation

2016-06-27 Thread Nicholas Knize (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicholas Knize resolved LUCENE-7343.

Resolution: Fixed

> Cleanup GeoPoint Query implementation
> -
>
> Key: LUCENE-7343
> URL: https://issues.apache.org/jira/browse/LUCENE-7343
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Nicholas Knize
> Attachments: LUCENE-7343.patch
>
>
> This is a cleanup task to simplify and trim dead code from GeoPointField's 
> query classes. Much of the relation logic in {{LatLonPoint}} can also be 
> applied to GeoPointField's {{CellComparator}} class eliminating the need to 
> carry its own separate relation methods.






[jira] [Updated] (SOLR-9027) Add GraphTermsQuery to limit traversal on high frequency nodes

2016-06-27 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9027?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9027:
-
Fix Version/s: 6.1

> Add GraphTermsQuery to limit traversal on high frequency nodes
> --
>
> Key: SOLR-9027
> URL: https://issues.apache.org/jira/browse/SOLR-9027
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Priority: Minor
> Fix For: 6.1
>
> Attachments: SOLR-9027.patch, SOLR-9027.patch, SOLR-9027.patch, 
> SOLR-9027.patch
>
>
> The gatherNodes() Streaming Expression is currently using a basic disjunction 
> query to perform the traversals. This ticket is to create a specific 
> GraphTermsQuery for performing the traversals. 
> The GraphTermsQuery will be based off of the TermsQuery, but will also 
> include an option for a docFreq cutoff. Terms that are above the docFreq 
> cutoff will not be included in the query. This will help users do a more 
> precise and efficient traversal.






[jira] [Created] (SOLR-9254) TestGraphTermsQParserPlugin.testQueries() NullPointerException

2016-06-27 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-9254:


 Summary: TestGraphTermsQParserPlugin.testQueries() 
NullPointerException
 Key: SOLR-9254
 URL: https://issues.apache.org/jira/browse/SOLR-9254
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Steve Rowe


My Jenkins found a reproducing seed on branch_6x:

{noformat}
Checking out Revision d1a047ad6f24078f23c9b4adf15210ac8a6e8f8a 
(refs/remotes/origin/branch_6x)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestGraphTermsQParserPlugin -Dtests.method=testQueries 
-Dtests.seed=E47472DC605D2D21 -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=sr-Latn-ME -Dtests.timezone=America/Guadeloupe 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s J11 | TestGraphTermsQParserPlugin.testQueries <<<
   [junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
query
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([E47472DC605D2D21:B8FABE077A34988F]:0)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:781)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:748)
   [junit4]>at 
org.apache.solr.search.TestGraphTermsQParserPlugin.testQueries(TestGraphTermsQParserPlugin.java:76)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.lang.NullPointerException
   [junit4]>at 
org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.rewrite(GraphTermsQParserPlugin.java:223)
   [junit4]>at 
org.apache.solr.search.GraphTermsQParserPlugin$GraphTermsQuery$1.bulkScorer(GraphTermsQParserPlugin.java:252)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:666)
   [junit4]>at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:473)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.buildAndRunCollectorChain(SolrIndexSearcher.java:261)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.getDocListNC(SolrIndexSearcher.java:1818)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.getDocListC(SolrIndexSearcher.java:1635)
   [junit4]>at 
org.apache.solr.search.SolrIndexSearcher.search(SolrIndexSearcher.java:644)
   [junit4]>at 
org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:528)
   [junit4]>at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:293)
   [junit4]>at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:154)
   [junit4]>at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:2035)
   [junit4]>at 
org.apache.solr.util.TestHarness.query(TestHarness.java:310)
   [junit4]>at 
org.apache.solr.util.TestHarness.query(TestHarness.java:292)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:755)
   [junit4]>... 41 more
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene62): 
{test_tl=PostingsFormat(name=Direct), _version_=BlockTreeOrds(blocksize=128), 
test_ti=BlockTreeOrds(blocksize=128), term_s=PostingsFormat(name=Asserting), 
test_tf=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
group_s=PostingsFormat(name=LuceneVarGapDocFreqInterval)}, docValues:{}, 
maxPointsInLeafNode=1102, maxMBSortInHeap=5.004024995692577, 
sim=ClassicSimilarity, locale=sr-Latn-ME, timezone=America/Guadeloupe
   [junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
1.8.0_77 (64-bit)/cpus=16,threads=1,free=283609904,total=531628032
{noformat}






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+124) - Build # 980 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/980/
Java: 64bit/jdk-9-ea+124 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Mon Jun 27 00:22:28 
GMT-12:00 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Mon Jun 27 00:22:28 GMT-12:00 2016
at 
__randomizedtesting.SeedInfo.seed([C4E68906C87BAC2D:1F4D89C0CD53C59E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1508)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:858)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

RE: Early Access builds of JDK 8u112 b01, JDK 9 b124 are available on java.net

2016-06-27 Thread Uwe Schindler
Hello,

 

I installed this version on Saturday: all looks fine so far.

 

Uwe

 

-

Uwe Schindler

H.-H.-Meier-Allee 63, D-28213 Bremen

  http://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Rory O'Donnell [mailto:rory.odonn...@oracle.com] 
Sent: Monday, June 27, 2016 11:27 AM
To: Uwe Schindler 
Cc: rory.odonn...@oracle.com; Dalibor Topic ; 
Balchandra Vaidya ; Muneer Kolarkunnu 
; Dawid Weiss ; 
dev@lucene.apache.org
Subject: Early Access builds of JDK 8u112 b01, JDK 9 b124 are available on 
java.net

 


Hi Uwe & Dawid, 

Early Access b124 for JDK 9 is available on java.net; a summary of changes is listed here.

Early Access b123 (#5178) for JDK 9 with Project Jigsaw is available on java.net; a summary of changes is listed here.

Early Access b01 for JDK 8u112 is available on java.net.

Update to JEP 261: Module System - email from Mark Reinhold [1]

- The special ALL-DEFAULT module name, which represents the default set of root 
modules for use with the -addmods option [2]; 
- A more thorough explanation of how the built-in class loaders have changed, 
how built-in modules are assigned to each loader, 
   and how these loaders work together to load classes [3]; and 
- The reason why Java EE-related modules are no longer resolved by default [4]. 
- There are various other minor corrections and clarifications, as can be seen 
in the detailed diff [5]. 


Rgds, Rory 

[1] http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-June/008227.html
[2] http://openjdk.java.net/jeps/261#ALL-DEFAULT
[3] http://openjdk.java.net/jeps/261#Class-loaders
[4] http://openjdk.java.net/jeps/261#EE-modules
[5] http://cr.openjdk.java.net/~mr/jigsaw/jeps/updates/261-2016-06-15.html 
 
-- 
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin,Ireland


[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5938 - Still Failing!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5938/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([83530D6BAED99CB3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
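
The failure above means a tracked resource (an InternalHttpClient) was created 
during the test but never released with close(). A minimal, self-contained 
sketch of the register-on-create / deregister-on-close pattern behind this kind 
of leak detector (hypothetical names; this is not Solr's actual ObjectTracker 
code):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LeakTrackerSketch {
    // Live registry of tracked resources; anything still here at test
    // teardown is reported as a leak.
    static final Map<Object, String> LIVE = new ConcurrentHashMap<>();

    static class TrackedClient implements AutoCloseable {
        TrackedClient() { LIVE.put(this, getClass().getSimpleName()); }
        @Override public void close() { LIVE.remove(this); }
    }

    public static void main(String[] args) {
        try (TrackedClient ok = new TrackedClient()) {
            // registered on construction, deregistered by close()
        }
        TrackedClient leaked = new TrackedClient(); // never closed -> stays registered
        System.out.println("ObjectTracker found " + LIVE.size()
            + " object(s) that were not released!!! " + LIVE.values());
        // prints: ObjectTracker found 1 object(s) that were not released!!! [TrackedClient]
    }
}
```

The usual fix on the test side is to release the client in a finally block or 
try-with-resources, as in the try block above.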




Build Log:
[...truncated 11986 lines...]
   [junit4] Suite: org.apache.solr.cloud.CdcrVersionReplicationTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.cloud.CdcrVersionReplicationTest_83530D6BAED99CB3-001\init-core-data-001
   [junit4]   2> 2037496 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[83530D6BAED99CB3]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 2037497 INFO  
(SUITE-CdcrVersionReplicationTest-seed#[83530D6BAED99CB3]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /yvp/w
   [junit4]   2> 2037501 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[83530D6BAED99CB3]) [ 
   ] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2037503 INFO  (Thread-5714) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2037503 INFO  (Thread-5714) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2037602 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[83530D6BAED99CB3]) [ 
   ] o.a.s.c.ZkTestServer start zk server on port:57248
   [junit4]   2> 2037602 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[83530D6BAED99CB3]) [ 
   ] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2037603 INFO  
(TEST-CdcrVersionReplicationTest.testCdcrDocVersions-seed#[83530D6BAED99CB3]) [ 
   ] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2037612 INFO  (zkCallback-2871-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@38d42304 
name:ZooKeeperConnection Watcher:127.0.0.1:57248 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1054 - Still Failing

2016-06-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1054/

10 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:41651/_qa: KeeperErrorCode = NoNode for 
/overseer/collection-queue-work/qnr-86

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41651/_qa: KeeperErrorCode = NoNode for 
/overseer/collection-queue-work/qnr-86
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1599)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1620)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.deleteCollectionWithDownNodes(CollectionsAPIDistributedZkTest.java:345)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:186)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Early Access builds of JDK 8u112 b01, JDK 9 b124 are available on java.net

2016-06-27 Thread Rory O'Donnell


Hi Uwe & Dawid,

Early Access b124 for JDK 9 is available on java.net; a summary of 
changes is listed here.

Early Access b123 (#5178) for JDK 9 with Project Jigsaw is available on 
java.net; a summary of changes is listed here.

Early Access b01 for JDK 8u112 is available on java.net.


Update to JEP 261: Module System - email from Mark Reinhold [1]

- The special ALL-DEFAULT module name, which represents the default set 
of root modules for use with the -addmods option [2];
- A more thorough explanation of how the built-in class loaders have 
changed, how built-in modules are assigned to each loader, and how these 
loaders work together to load classes [3];
- The reason why Java EE-related modules are no longer resolved by 
default [4];
- There are various other minor corrections and clarifications, as can 
be seen in the detailed diff [5].



Rgds, Rory

[1]http://mail.openjdk.java.net/pipermail/jigsaw-dev/2016-June/008227.html
[2]http://openjdk.java.net/jeps/261#ALL-DEFAULT
[3]http://openjdk.java.net/jeps/261#Class-loaders
[4]http://openjdk.java.net/jeps/261#EE-modules
[5]http://cr.openjdk.java.net/~mr/jigsaw/jeps/updates/261-2016-06-15.html 



--
Rgds, Rory O'Donnell
Quality Engineering Manager
Oracle EMEA, Dublin, Ireland



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_92) - Build # 17074 - Failure!

2016-06-27 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17074/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'P val' for path 'response/params/y/p' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":2, "params":{   "x":{ "a":"A val", 
"b":"B val", "":{"v":0}},   "y":{ "c":"CY val modified",
 "b":"BY val", "i":20, "d":[   "val 1",   
"val 2"], "e":"EY val", "":{"v":1},  from server:  
https://127.0.0.1:40384/vgbz/b/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":2,
"params":{
  "x":{
"a":"A val",
"b":"B val",
"":{"v":0}},
  "y":{
"c":"CY val modified",
"b":"BY val",
"i":20,
"d":[
  "val 1",
  "val 2"],
"e":"EY val",
"":{"v":1},  from server:  
https://127.0.0.1:40384/vgbz/b/collection1
at 
__randomizedtesting.SeedInfo.seed([6BDA9DCB9801C5D9:E38EA21136FDA821]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:481)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:215)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
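
The assertion in TestSolrConfigHandler.testForResponseElement walks the parsed 
response by a slash-separated path ('response/params/y/p') and fails when no 
value appears there within the retry window. A minimal sketch of that kind of 
path lookup over nested maps (hypothetical helper, not the actual Solr test 
code):

```java
import java.util.HashMap;
import java.util.Map;

public class PathLookupSketch {
    // Walk a nested Map by a slash-separated path; returns null if any
    // segment is missing -- mirroring how the test fails when
    // 'response/params/y/p' has no value yet.
    static Object getByPath(Map<String, Object> root, String path) {
        Object cur = root;
        for (String seg : path.split("/")) {
            if (!(cur instanceof Map)) return null;
            cur = ((Map<?, ?>) cur).get(seg);
        }
        return cur;
    }

    public static void main(String[] args) {
        Map<String, Object> y = new HashMap<>();
        y.put("c", "CY val modified");
        Map<String, Object> params = new HashMap<>();
        params.put("y", y);
        Map<String, Object> response = new HashMap<>();
        response.put("params", params);
        Map<String, Object> root = new HashMap<>();
        root.put("response", response);

        System.out.println(getByPath(root, "response/params/y/c")); // prints: CY val modified
        System.out.println(getByPath(root, "response/params/y/p")); // prints: null (test would fail)
    }
}
```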

[jira] [Commented] (SOLR-9253) solrcloud goes down

2016-06-27 Thread Junfeng Mu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15350578#comment-15350578
 ] 

Junfeng Mu commented on SOLR-9253:
--

See the attachment for the connection code.

> solrcloud goes down
> ---
>
> Key: SOLR-9253
> URL: https://issues.apache.org/jira/browse/SOLR-9253
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 4.9.1
> Environment: jboss, zookeeper
>Reporter: Junfeng Mu
> Attachments: 20160627161845.png, javacore.165.txt
>
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> We use SolrCloud in our project. Currently we use Solr, but the data grows 
> bigger and bigger, so we want to switch to SolrCloud. However, once we 
> switch, SolrCloud goes down. It seems that SolrCloud is blocked and cannot 
> handle new queries. Please see the attachments and help us ASAP. Thanks!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


