[jira] [Created] (SOLR-9127) XLSX response writer - do we want it?

2016-05-17 Thread Tony Moriarty (JIRA)
Tony Moriarty created SOLR-9127:
---

 Summary: XLSX response writer - do we want it?
 Key: SOLR-9127
 URL: https://issues.apache.org/jira/browse/SOLR-9127
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers
Reporter: Tony Moriarty
Priority: Minor
 Fix For: 6.0, 5.5


I recently open-sourced an XLSX response writer based on Solr 4.6 and Apache 
POI.

https://github.com/desultir/SolrXLSXResponseWriter

Is this something the community would be interested in bringing into the Solr 
codebase? I'm willing to put in the work to port it to Solr 5 and Solr 6 if the 
community is interested; happy to leave it as a plugin otherwise.
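For anyone who wants to try it as a plugin in the meantime: a custom response writer is normally dropped in as a jar and registered in solrconfig.xml. This is a hedged sketch only; the class name and lib path below are illustrative placeholders, not the actual names from the linked repo.

```xml
<!-- Hypothetical solrconfig.xml registration of a custom response writer.
     Substitute the real writer class shipped in the plugin jar, and make
     sure the jar is on a path covered by a <lib> directive. -->
<config>
  <lib dir="${solr.install.dir}/contrib/xlsx/lib" regex=".*\.jar" />
  <queryResponseWriter name="xlsx"
                       class="com.example.solr.XLSXResponseWriter" />
</config>
```

Requests would then select the writer with wt=xlsx, the same way the built-in CSV writer is selected with wt=csv.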



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1147 - Failure

2016-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1147/

2 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testCommitWithin

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([BFBC818953F07D3C:56EEEF1D0DE9329]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:785)
at 
org.apache.solr.update.AutoCommitTest.testCommitWithin(AutoCommitTest.java:325)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=1]
xml response was: 

00


request was:q=id:529&qt=standard&start=0&rows=20&version=2.2
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:778)
... 40 more


FAILED:  org.apache.solr.update.AutoCommitTest.testMaxDocs

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during 

[JENKINS] Lucene-Solr-NightlyTests-6.0 - Build # 12 - Still Failing

2016-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.0/12/

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:50480/_/w

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:50480/_/w
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:588)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.makeRequest(CollectionsAPIDistributedZkTest.java:382)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:440)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test(CollectionsAPIDistributedZkTest.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-7278) Make template Calendar configurable in DateRangePrefixTree

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288148#comment-15288148
 ] 

ASF subversion and git services commented on LUCENE-7278:
-

Commit 14af6994ea2734e91616f7f23ed90c7b4f050018 in lucene-solr's branch 
refs/heads/branch_6_0 from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=14af699 ]

LUCENE-7278: DRPT: fix test bug (when milli is 0)
(cherry picked from commit bd3e568)


> Make template Calendar configurable in DateRangePrefixTree
> --
>
> Key: LUCENE-7278
> URL: https://issues.apache.org/jira/browse/LUCENE-7278
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7278.patch, LUCENE_7278.patch
>
>
> DateRangePrefixTree (a SpatialPrefixTree designed for dates and date ranges) 
> currently uses a hard-coded Calendar template for making new instances.  This 
> ought to be configurable so that, for example, the Gregorian change date can 
> be configured.  This is particularly important for compatibility with Java 
> 8's java.time API which uses the Gregorian calendar for all time (there is no 
> use of Julian prior to 1582).
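The cutover difference described above can be illustrated with a plain-JDK sketch (this is illustrative code, not part of the attached patch): with the default GregorianCalendar, dates before the October 1582 change date follow Julian rules, while pushing the change date to Long.MIN_VALUE makes the calendar proleptically Gregorian, matching java.time's ISO chronology.

```java
import java.time.chrono.IsoChronology;
import java.util.Date;
import java.util.GregorianCalendar;
import java.util.Locale;
import java.util.TimeZone;

public class ProlepticGregorianDemo {
    public static void main(String[] args) {
        GregorianCalendar cal =
            new GregorianCalendar(TimeZone.getTimeZone("UTC"), Locale.ROOT);

        // Default cutover (Oct 15, 1582): year 1500 is evaluated under
        // Julian rules, where every fourth year is a leap year.
        System.out.println(cal.isLeapYear(1500)); // true

        // Make the calendar proleptically Gregorian, like java.time.
        cal.setGregorianChange(new Date(Long.MIN_VALUE));
        System.out.println(cal.isLeapYear(1500)); // false: divisible by 100, not 400

        // java.time agrees: the ISO chronology is proleptic Gregorian.
        System.out.println(IsoChronology.INSTANCE.isLeapYear(1500)); // false
    }
}
```

This is exactly the kind of behavioral difference a configurable Calendar template would let callers control.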






[jira] [Commented] (LUCENE-7278) Make template Calendar configurable in DateRangePrefixTree

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288146#comment-15288146
 ] 

ASF subversion and git services commented on LUCENE-7278:
-

Commit bd3e568754ac0b4b96e4a955387c413e0770e871 in lucene-solr's branch 
refs/heads/branch_6x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd3e568 ]

LUCENE-7278: DRPT: fix test bug (when milli is 0)
(cherry picked from commit 2accf12)


> Make template Calendar configurable in DateRangePrefixTree
> --
>
> Key: LUCENE-7278
> URL: https://issues.apache.org/jira/browse/LUCENE-7278
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7278.patch, LUCENE_7278.patch
>
>
> DateRangePrefixTree (a SpatialPrefixTree designed for dates and date ranges) 
> currently uses a hard-coded Calendar template for making new instances.  This 
> ought to be configurable so that, for example, the Gregorian change date can 
> be configured.  This is particularly important for compatibility with Java 
> 8's java.time API which uses the Gregorian calendar for all time (there is no 
> use of Julian prior to 1582).






[jira] [Commented] (LUCENE-7278) Make template Calendar configurable in DateRangePrefixTree

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288143#comment-15288143
 ] 

ASF subversion and git services commented on LUCENE-7278:
-

Commit 2accf12d710f743b51bbc24f613a36f51b572e37 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2accf12 ]

LUCENE-7278: DRPT: fix test bug (when milli is 0)


> Make template Calendar configurable in DateRangePrefixTree
> --
>
> Key: LUCENE-7278
> URL: https://issues.apache.org/jira/browse/LUCENE-7278
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7278.patch, LUCENE_7278.patch
>
>
> DateRangePrefixTree (a SpatialPrefixTree designed for dates and date ranges) 
> currently uses a hard-coded Calendar template for making new instances.  This 
> ought to be configurable so that, for example, the Gregorian change date can 
> be configured.  This is particularly important for compatibility with Java 
> 8's java.time API which uses the Gregorian calendar for all time (there is no 
> use of Julian prior to 1582).






[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 188 - Failure!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/188/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.lucene.store.TestRAFDirectory.testListAllIsSorted

Error Message:
access denied ("java.io.FilePermission" 
"C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_712AEE406594C8C0-001\tempDir-003\con"
 "write")

Stack Trace:
java.security.AccessControlException: access denied ("java.io.FilePermission" 
"C:\Users\jenkins\workspace\Lucene-Solr-6.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestRAFDirectory_712AEE406594C8C0-001\tempDir-003\con"
 "write")
at 
__randomizedtesting.SeedInfo.seed([712AEE406594C8C0:EB064932A5AB4749]:0)
at 
java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at 
java.security.AccessController.checkPermission(AccessController.java:884)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:549)
at java.lang.SecurityManager.checkWrite(SecurityManager.java:979)
at sun.nio.fs.WindowsChannelFactory.open(WindowsChannelFactory.java:295)
at 
sun.nio.fs.WindowsChannelFactory.newFileChannel(WindowsChannelFactory.java:162)
at 
sun.nio.fs.WindowsFileSystemProvider.newByteChannel(WindowsFileSystemProvider.java:225)
at 
java.nio.file.spi.FileSystemProvider.newOutputStream(FileSystemProvider.java:434)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:129)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newOutputStream(HandleTrackingFS.java:129)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newOutputStream(FilterFileSystemProvider.java:197)
at java.nio.file.Files.newOutputStream(Files.java:216)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:408)
at 
org.apache.lucene.store.FSDirectory$FSIndexOutput.<init>(FSDirectory.java:404)
at 
org.apache.lucene.store.FSDirectory.createOutput(FSDirectory.java:253)
at 
org.apache.lucene.store.BaseDirectoryTestCase.testListAllIsSorted(BaseDirectoryTestCase.java:1279)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
   

[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15288019#comment-15288019
 ] 

Ishan Chattopadhyaya commented on SOLR-5944:


Thanks for your analysis, Hoss. I'll take a deeper look as soon as possible. A 
pattern I have observed with such failures (and these failures are the ones I 
was referring to in the past) is that documents get into trouble immediately 
after or during a commit (i.e., between the commit start and end) happening in 
a parallel thread.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5850 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5850/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([6829580BAC95E31F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:255)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B 
val", "":{"v":0},  from server:  https://127.0.0.1:64205/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0},  from server:  https://127.0.0.1:64205/collection1
at 
__randomizedtesting.SeedInfo.seed([6829580BAC95E31F:E07D67D102698EE7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:160)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 

[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-5944:
---
Attachment: hoss.62D328FA1DEA57FD.fail3.txt
hoss.62D328FA1DEA57FD.fail2.txt
hoss.62D328FA1DEA57FD.fail.txt
hoss.D768DD9443A98DC.fail.txt
hoss.D768DD9443A98DC.pass.txt

I've been reviewing the logs from sarowe's failures -- I won't pretend to 
understand half of what I'm looking at here (I'm still not up on most of the 
new code) but here are some interesting patterns I've noticed...

* in both failure logs posted, doc "13" was the doc having problems
* the specific docId is probably just a coincidence, but it does mean that the 
same egrep command works on both log files to give you the particularly 
interesting bits relative to the failure...{noformat}
egrep add=\\[13\|id=13\|ids=13 
TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt > 
beast-167.important.txt
egrep add=\\[13\|id=13\|ids=13 
TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt > 
beast-587.important.txt
{noformat}
* looking first at beast-587.important.txt:
** the ERROR that failed the test was first logged by READER2 @ (timestamp) 
34456:{noformat}
34456 ERROR (READER2) [] o.a.s.c.TestStressInPlaceUpdates Realtime=true, 
ERROR, id=13 found={response={numFound=1,start=0,docs=[SolrDocument{id=13, 
title_s=[title13], val1_i_dvo=3, val2_l_dvo=36, 
_version_=1534607778351415296, ratings=0.0, price=0}]}} 
model=[1534607780231512064, 3, 300012]
{noformat}
** Working backwards, that expected version 1534607780231512064 was logged by 
WRITER10 as being returned to a PARTIAL update @ 31219:{noformat}
31219 INFO  (WRITER10) [] o.a.s.c.TestStressInPlaceUpdates PARTIAL: Writing 
id=13, val=[3,300012], version=1534607779993485312, Prev 
was=[3,39].  Returned version=1534607780231512064
{noformat}
*** WRITER10's logging of this "Returned version=1534607780231512064" came 
after core_node1, core_node2, and core_node3 all logged it being written to 
their TLOG & reported it via LogUpdateProc:{noformat}
30557 INFO  (qtp2010985731-180) [n:127.0.0.1:37972__m c:collection1 s:shard1 
r:core_node1 x:collection1] o.a.s.u.UpdateLog TLOG: added id 
13(ver=1534607780231512064, prevVersion=1534607779993485312, prevPtr=2343) to 
tlog{file=/tmp/beast-tmp-output/587/J0/temp/solr.cloud.TestStressInPlaceUpdates_FFC46C473EC471E6-001/shard-1-001/cores/collection1/data/tlog/tlog.004
 refcount=1} LogPtr(2396) map=1977112331, actual doc=SolrInputDocument(fields: 
[id=13, val2_l_dvo=300012, _version_=1534607780231512064, val1_i_dvo=3])
30589 INFO  (qtp1755078679-232) [n:127.0.0.1:38407__m c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.u.UpdateLog TLOG: added id 
13(ver=1534607780231512064, prevVersion=1534607779993485312, prevPtr=2343) to 
tlog{file=/tmp/beast-tmp-output/587/J0/temp/solr.cloud.TestStressInPlaceUpdates_FFC46C473EC471E6-001/shard-2-001/cores/collection1/data/tlog/tlog.002
 refcount=1} LogPtr(2396) map=1630836284, actual doc=SolrInputDocument(fields: 
[id=13, val2_l_dvo=300012, _version_=1534607780231512064, val1_i_dvo=3])
30589 INFO  (qtp1755078679-232) [n:127.0.0.1:38407__m c:collection1 s:shard1 
r:core_node2 x:collection1] o.a.s.u.p.LogUpdateProcessorFactory [collection1]  
webapp=/_m path=/update 
params={update.distrib=FROMLEADER=http://127.0.0.1:37972/_m/collection1/=1534607779993485312=javabin=2=true}{add=[13
 (1534607780231512064)]} 0 0
31216 INFO  (qtp2143623462-144) [n:127.0.0.1:58295__m c:collection1 s:shard1 
r:core_node3 x:collection1] o.a.s.u.UpdateLog TLOG: added id 
13(ver=1534607780231512064, prevVersion=1534607779993485312, prevPtr=2343) to 
tlog{file=/tmp/beast-tmp-output/587/J0/temp/solr.cloud.TestStressInPlaceUpdates_FFC46C473EC471E6-001/shard-3-001/cores/collection1/data/tlog/tlog.002
 refcount=1} LogPtr(2396) map=1500522809, actual doc=SolrInputDocument(fields: 
[id=13, val2_l_dvo=300012, _version_=1534607780231512064, val1_i_dvo=3])
31216 INFO  (qtp2143623462-144) [n:127.0.0.1:58295__m c:collection1 s:shard1 
r:core_node3 x:collection1] o.a.s.u.p.LogUpdateProcessorFactory [collection1]  
webapp=/_m path=/update 
params={update.distrib=FROMLEADER=http://127.0.0.1:37972/_m/collection1/=1534607779993485312=javabin=2=true}{add=[13
 (1534607780231512064)]} 0 0
31219 INFO  (qtp2010985731-180) [n:127.0.0.1:37972__m c:collection1 s:shard1 
r:core_node1 x:collection1] o.a.s.u.p.LogUpdateProcessorFactory [collection1]  
webapp=/_m path=/update params={versions=true=javabin=2}{add=[13 
(1534607780231512064)]} 0 662
{noformat}
** but looking *after* the ERROR was first logged @ 34456, we see that before 
the test had a chance to shut down all the nodes, there was some suspicious 
looking logging from core_node2 regarding updates out of order, that refer to 
the expected 

Re: [JENKINS] Lucene-Solr-Tests-6.x - Build # 206 - Still Failing

2016-05-17 Thread David Smiley
I'll dig.

On Tue, May 17, 2016 at 7:01 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/206/
>
> 2 tests failed.
> FAILED:
> org.apache.lucene.spatial.prefix.tree.DateRangePrefixTreeTest.testRoundTrip
> {p0=java.util.GregorianCalendar[time=?,areFieldsSet=false,areAllFieldsSet=false,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=2,minimalDaysInFirstWeek=4,ERA=?,YEAR=?,MONTH=?,WEEK_OF_YEAR=?,WEEK_OF_MONTH=?,DAY_OF_MONTH=?,DAY_OF_YEAR=?,DAY_OF_WEEK=?,DAY_OF_WEEK_IN_MONTH=?,AM_PM=?,HOUR=?,HOUR_OF_DAY=?,MINUTE=?,SECOND=?,MILLISECOND=?,ZONE_OFFSET=?,DST_OFFSET=?]}
>
> Error Message:
>
>
> Stack Trace:
> java.lang.AssertionError
> at
> __randomizedtesting.SeedInfo.seed([A155F458C3C50705:A39BAE1E5BC980AE]:0)
> at
> org.apache.lucene.spatial.prefix.tree.DateRangePrefixTreeTest.roundTrip(DateRangePrefixTreeTest.java:112)
> at
> org.apache.lucene.spatial.prefix.tree.DateRangePrefixTreeTest.testRoundTrip(DateRangePrefixTreeTest.java:92)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at java.lang.Thread.run(Thread.java:745)
>
>
> FAILED:
> 

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 680 - Failure!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/680/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:44066/solr/testschemaapi_shard1_replica2: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:44066/solr/testschemaapi_shard1_replica2: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([76D8BADCD7F4B186:FE8C85067908DC7E]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 140 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/140/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestMiniSolrCloudClusterSSL: 1) Thread[id=1644, 
name=OverseerHdfsCoreFailoverThread-95913802742038536-127.0.0.1:37547_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestMiniSolrCloudClusterSSL: 
   1) Thread[id=1644, 
name=OverseerHdfsCoreFailoverThread-95913802742038536-127.0.0.1:37547_solr-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([2D665F42D82D4AB9]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestMiniSolrCloudClusterSSL

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1644, 
name=OverseerHdfsCoreFailoverThread-95913802742038536-127.0.0.1:37547_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.interrupt0(Native Method) at 
java.lang.Thread.interrupt(Thread.java:923) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=1644, 
name=OverseerHdfsCoreFailoverThread-95913802742038536-127.0.0.1:37547_solr-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.interrupt0(Native Method)
at java.lang.Thread.interrupt(Thread.java:923)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([2D665F42D82D4AB9]:0)




Build Log:
[...truncated 10593 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestMiniSolrCloudClusterSSL
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/solr/build/solr-core/test/J1/temp/solr.cloud.TestMiniSolrCloudClusterSSL_2D665F42D82D4AB9-001/init-core-data-001
   [junit4]   2> 127253 INFO  
(SUITE-TestMiniSolrCloudClusterSSL-seed#[2D665F42D82D4AB9]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 127271 INFO  
(TEST-TestMiniSolrCloudClusterSSL.testNoSslButSillyClientAuth-seed#[2D665F42D82D4AB9])
 [] o.a.s.SolrTestCaseJ4 ###Starting testNoSslButSillyClientAuth
   [junit4]   2> 127271 INFO  
(TEST-TestMiniSolrCloudClusterSSL.testNoSslButSillyClientAuth-seed#[2D665F42D82D4AB9])
 [] o.a.s.c.TestMiniSolrCloudClusterSSL NOTE: This Test ignores the 
randomized SSL & clientAuth settings selected by base class
   [junit4]   2> 127274 INFO  
(TEST-TestMiniSolrCloudClusterSSL.testNoSslButSillyClientAuth-seed#[2D665F42D82D4AB9])
 [] o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 127274 INFO  (Thread-375) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 127274 INFO  (Thread-375) [] o.a.s.c.ZkTestServer Starting 
server
   [junit4]   2> 127374 INFO  
(TEST-TestMiniSolrCloudClusterSSL.testNoSslButSillyClientAuth-seed#[2D665F42D82D4AB9])
 [] o.a.s.c.ZkTestServer start zk server on port:58174
   [junit4]   2> 127374 INFO  
(TEST-TestMiniSolrCloudClusterSSL.testNoSslButSillyClientAuth-seed#[2D665F42D82D4AB9])
 [] o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 127375 INFO  
(TEST-TestMiniSolrCloudClusterSSL.testNoSslButSillyClientAuth-seed#[2D665F42D82D4AB9])
 [] o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 127384 INFO  (zkCallback-201-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@1428de0d 
name:ZooKeeperConnection Watcher:127.0.0.1:58174 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 127384 INFO  
(TEST-TestMiniSolrCloudClusterSSL.testNoSslButSillyClientAuth-seed#[2D665F42D82D4AB9])
 [] o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 127385 INFO  

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 491 - Still Failing

2016-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/491/

No tests ran.

Build Log:
[...truncated 40509 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (16.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 28.6 MB in 0.03 sec (1041.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 63.1 MB in 0.06 sec (1092.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 73.6 MB in 0.07 sec (1101.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6015 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6015 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 221 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] Releases that don't seem to be tested:
   [smoker]   5.5.1
   [smoker] Traceback (most recent call last):
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1414, in 
   [smoker] main()
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1358, in main
   [smoker] smokeTest(c.java, c.url, c.revision, c.version, c.tmp_dir, 
c.is_signed, ' '.join(c.test_args))
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1396, in smokeTest
   [smoker] unpackAndVerify(java, 'lucene', tmpDir, 'lucene-%s-src.tgz' % 
version, gitRevision, version, testArgs, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 590, in unpackAndVerify
   [smoker] verifyUnpacked(java, project, artifact, unpackPath, 
gitRevision, version, testArgs, tmpDir, baseURL)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 736, in verifyUnpacked
   [smoker] confirmAllReleasesAreTestedForBackCompat(version, unpackPath)
   [smoker]   File 
"/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/dev-tools/scripts/smokeTestRelease.py",
 line 1351, in confirmAllReleasesAreTestedForBackCompat
   [smoker] raise RuntimeError('some releases are not tested by 
TestBackwardsCompatibility?')
   [smoker] RuntimeError: some releases are not tested by 
TestBackwardsCompatibility?

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/build.xml:536:
 exec returned: 1

Total time: 29 minutes 51 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
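The smoke-test failure above comes from a back-compat coverage check: every past
release must be exercised by TestBackwardsCompatibility, and 5.5.1 was not. A
simplified, hypothetical sketch of that kind of check — not the actual
smokeTestRelease.py logic, and the version lists below are illustrative only:

```python
def untested_releases(past_releases, tested_versions):
    """Return past releases that have no back-compat test coverage."""
    return sorted(set(past_releases) - set(tested_versions))

past = ["5.5.0", "5.5.1", "6.0.0"]   # hypothetical list of past releases
tested = ["5.5.0", "6.0.0"]          # hypothetical versions covered by the test

missing = untested_releases(past, tested)
# The real script raises RuntimeError when this list is non-empty,
# which is what failed the build above (it reported 5.5.1).
print("Releases that don't seem to be tested:", missing)
# → Releases that don't seem to be tested: ['5.5.1']
```

In the real build, the fix is to add the missing release's back-compat index to
the TestBackwardsCompatibility coverage, after which the smoker passes this step.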

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-9-ea+118) - Build # 16771 - Failure!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16771/
Java: 64bit/jdk-9-ea+118 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestManagedSchemaAPI.test

Error Message:
Error from server at http://127.0.0.1:35785/solr/testschemaapi_shard1_replica1: 
ERROR: [doc=2] unknown field 'myNewField1'

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:35785/solr/testschemaapi_shard1_replica1: ERROR: 
[doc=2] unknown field 'myNewField1'
at 
__randomizedtesting.SeedInfo.seed([E2D2AF4F8814BD1E:6A86909526E8D0E6]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:697)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.schema.TestManagedSchemaAPI.testAddFieldAndDocument(TestManagedSchemaAPI.java:86)
at 
org.apache.solr.schema.TestManagedSchemaAPI.test(TestManagedSchemaAPI.java:55)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 588 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/588/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, TransactionLog, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MockDirectoryWrapper, TransactionLog, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([1ACD9BAAA084D18C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:255)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12214 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.schema.TestManagedSchemaAPI_1ACD9BAAA084D18C-001/init-core-data-001
   [junit4]   2> 4241025 INFO  
(SUITE-TestManagedSchemaAPI-seed#[1ACD9BAAA084D18C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true)
   [junit4]   2> 4241029 INFO  
(SUITE-TestManagedSchemaAPI-seed#[1ACD9BAAA084D18C]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 4241030 INFO  (Thread-10028) [] o.a.s.c.ZkTestServer 
client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 4241030 INFO  (Thread-10028) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 4241130 INFO  
(SUITE-TestManagedSchemaAPI-seed#[1ACD9BAAA084D18C]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:42218
   [junit4]   2> 4241130 INFO  
(SUITE-TestManagedSchemaAPI-seed#[1ACD9BAAA084D18C]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 4241130 INFO  
(SUITE-TestManagedSchemaAPI-seed#[1ACD9BAAA084D18C]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 4241133 INFO  (zkCallback-3831-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@65a152cf 
name:ZooKeeperConnection Watcher:127.0.0.1:42218 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 4241133 INFO  
(SUITE-TestManagedSchemaAPI-seed#[1ACD9BAAA084D18C]-worker) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 4241134 INFO  

[JENKINS] Lucene-Solr-Tests-6.x - Build # 206 - Still Failing

2016-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/206/

2 tests failed.
FAILED:  
org.apache.lucene.spatial.prefix.tree.DateRangePrefixTreeTest.testRoundTrip 
{p0=java.util.GregorianCalendar[time=?,areFieldsSet=false,areAllFieldsSet=false,lenient=true,zone=sun.util.calendar.ZoneInfo[id="UTC",offset=0,dstSavings=0,useDaylight=false,transitions=0,lastRule=null],firstDayOfWeek=2,minimalDaysInFirstWeek=4,ERA=?,YEAR=?,MONTH=?,WEEK_OF_YEAR=?,WEEK_OF_MONTH=?,DAY_OF_MONTH=?,DAY_OF_YEAR=?,DAY_OF_WEEK=?,DAY_OF_WEEK_IN_MONTH=?,AM_PM=?,HOUR=?,HOUR_OF_DAY=?,MINUTE=?,SECOND=?,MILLISECOND=?,ZONE_OFFSET=?,DST_OFFSET=?]}

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A155F458C3C50705:A39BAE1E5BC980AE]:0)
at 
org.apache.lucene.spatial.prefix.tree.DateRangePrefixTreeTest.roundTrip(DateRangePrefixTreeTest.java:112)
at 
org.apache.lucene.spatial.prefix.tree.DateRangePrefixTreeTest.testRoundTrip(DateRangePrefixTreeTest.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.lucene.spatial.prefix.tree.DateRangePrefixTreeTest.testRoundTrip 

[jira] [Created] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-05-17 Thread Dmytro Hambal (JIRA)
Dmytro Hambal created LUCENE-7287:
-

 Summary: New lemma-tizer plugin for ukrainian language.
 Key: LUCENE-7287
 URL: https://issues.apache.org/jira/browse/LUCENE-7287
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Dmytro Hambal
Priority: Minor


Hi all,

I wonder whether you would be interested in supporting a plugin that provides 
a mapping between Ukrainian word forms and their lemmas. Some tests and docs 
come out of the box =).

https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer

It's really simple but still works and generates some value for its users.

More: https://github.com/elastic/elasticsearch/issues/18303
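Conceptually, a dictionary-backed lemmatizer of this kind maps each surface form to its base form during analysis. A toy sketch of that lookup (the dictionary entries and class names here are illustrative, not taken from the actual plugin):

```java
import java.util.Map;

public class LemmaSketch {
    // Toy dictionary: surface form -> lemma (illustrative entries only).
    static final Map<String, String> DICT = Map.of(
            "словами", "слово",
            "слова", "слово",
            "мовою", "мова");

    // A lemmatizing token filter conceptually replaces each token with its
    // lemma, falling back to the surface form when the word is unknown.
    static String lemmatize(String token) {
        return DICT.getOrDefault(token, token);
    }

    public static void main(String[] args) {
        if (!lemmatize("словами").equals("слово")) throw new AssertionError();
        if (!lemmatize("unknown").equals("unknown")) throw new AssertionError();
        System.out.println("ok");
    }
}
```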



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287640#comment-15287640
 ] 

Steve Rowe commented on SOLR-5944:
--

Both of those seeds (FFC46C473EC471E6 and 15E180DC7142CBF3) reproduce for me 
too (only tried each one once). 

A third beasting failure, run 783, does NOT reproduce for me (0 failures out of 
4 runs):

{noformat}
ant test  -Dtestcase=TestStressInPlaceUpdates -Dtests.method=stressTest 
-Dtests.seed=CCB5FA74FA9BB974 -Dtests.slow=true -Dtests.locale=sr 
-Dtests.timezone=Africa/Gaborone -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
{noformat}


> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Updated] (SOLR-9110) migrate SubQuery-, Join-, ChildFacet- tests to SolrCloudTestCase

2016-05-17 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-9110:
---
Attachment: SOLR-9110.patch

OK, got the first breakthrough after all: I migrated the SubQueries test.
[~romseygeek], you might be interested in a newcomer's feedback.

It's not obvious at all that this copies the config folder to ZooKeeper under 
the given name:
{code}
String configName = "solrCloudCollectionConfig";
int nodeCount = 5;
configureCluster(nodeCount)
   .addConfig(configName, configDir)
   .configure();
{code} 
If possible, I'd ask to simplify this, perhaps with more descriptive names or 
more implicit behavior.
Also, I ran into a typical foot-gun: the test had solrconfig-basic.xml without 
an update log. Somewhere deep in the log it was reported as something like 
_ERROR ... RecoveryStrategy  No UpdateLog found  cannot recover_, which is not 
easy to spot from the log tail. Ideally there would be a circuit breaker 
preventing a cloud launch without update logs.
These two thoughts are just FYI, no action really needed.

> migrate SubQuery-, Join-, ChildFacet- tests  to SolrCloudTestCase
> -
>
> Key: SOLR-9110
> URL: https://issues.apache.org/jira/browse/SOLR-9110
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9110.patch
>
>
> I want to migrate the following classes to SolrCloudTestCase 
> * DistribJoinFromCollectionTest
> * TestSubQueryTransformerDistrib
> * BlockJoinFacetDistribTest






[jira] [Commented] (LUCENE-7212) Add Geo3DPoint equivalents of LatLonPointDistanceComparator and LatLonPointSortField

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287617#comment-15287617
 ] 

ASF subversion and git services commented on LUCENE-7212:
-

Commit 8a407f0399c6575d6f4bb087f1d9fdc7d112e5d2 in lucene-solr's branch 
refs/heads/branch_6x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8a407f0 ]

LUCENE-7212: Add Geo3D sorted document fields.


> Add Geo3DPoint equivalents of LatLonPointDistanceComparator and 
> LatLonPointSortField
> 
>
> Key: LUCENE-7212
> URL: https://issues.apache.org/jira/browse/LUCENE-7212
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7212.patch, LUCENE-7212.patch
>
>
> Geo3D has a number of distance measurements and a generic way of computing 
> interior distance.  It would be great to take advantage of that for queries 
> that return results ordered by interior distance.






[jira] [Commented] (LUCENE-7212) Add Geo3DPoint equivalents of LatLonPointDistanceComparator and LatLonPointSortField

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287594#comment-15287594
 ] 

ASF subversion and git services commented on LUCENE-7212:
-

Commit 07af00d8e7bc4ce2820973e2ab511bfe536654c6 in lucene-solr's branch 
refs/heads/master from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=07af00d ]

LUCENE-7212: Add Geo3D sorted document fields.


> Add Geo3DPoint equivalents of LatLonPointDistanceComparator and 
> LatLonPointSortField
> 
>
> Key: LUCENE-7212
> URL: https://issues.apache.org/jira/browse/LUCENE-7212
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Karl Wright
>Assignee: Karl Wright
> Attachments: LUCENE-7212.patch, LUCENE-7212.patch
>
>
> Geo3D has a number of distance measurements and a generic way of computing 
> interior distance.  It would be great to take advantage of that for queries 
> that return results ordered by interior distance.






[jira] [Resolved] (SOLR-8208) DocTransformer executes sub-queries

2016-05-17 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev resolved SOLR-8208.

Resolution: Fixed

> DocTransformer executes sub-queries
> ---
>
> Key: SOLR-8208
> URL: https://issues.apache.org/jira/browse/SOLR-8208
> Project: Solr
>  Issue Type: Improvement
>  Components: Response Writers
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>  Labels: features, newbie
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8208-distrib-test-fix.patch, SOLR-8208.diff, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, SOLR-8208.patch, 
> SOLR-8208.patch, SOLR-8208.patch
>
>
> The initial idea was to return the "from" side of a query-time join via a 
> doctransformer. I suppose it isn't query-time-join specific, so let's allow 
> specifying any query and parameters for it; call it a sub-query. But it might 
> be problematic to escape subquery parameters, including local ones, 
> e.g. what if the subquery needs to specify its own doctransformer in =\[..\] ?
> I suppose we can specify subquery parameter prefix:
> {code}
> ..=name_s:john=*,depts:[subquery fromIndex=departments]&
> depts.q={!term f=dept_id_s 
> v=$row.dept_ss_dv}=text_t,dept_id_s_dv=12=id 
> desc
> {code}   
> response is like
> {code}   
> 
> ...
> 
> 
> 1
> john
> ..
> 
> 
> Engineering
> These guys develop stuff
> 
> 
> Support
> These guys help users
> 
> 
> 
> 
> 
> {code}   
> * {{fl=depts:\[subquery]}} executes a separate request for every query result 
> row, and adds it into a document as a separate result list. The given field 
> name (here it's 'depts') is used as a prefix to shift subquery parameters 
> from main query parameter, eg {{depts.q}} turns to {{q}} for subquery, 
> {{depts.rows}} to {{rows}}.
> * document fields are available as implicit parameters with prefix {{row.}} 
> eg. if result document has a field {{dept_id}} it can be referred as 
> {{v=$row.dept_id}} this combines well with \{!terms} query parser   
> * {{separator=','}} is used when multiple field values are combined into a 
> parameter, e.g. a document has a multivalue field {code}dept_ids={2,3}{code}, 
> so referring to it via {code}..={!terms f=id 
> v=$row.dept_ids}&..{code} executes the subquery {code}{!terms f=id}2,3{code}. 
> When omitted, the separator defaults to a comma.
> * the optional {{fromIndex=othercore}} param allows running the subquery on 
> another core, as in query-time join.
> However, it doesn't work in a cloud setup (and will tell you so); instead, 
> it's proposed to use regular params ({{collection}}, {{shards}}, and so on, 
> with the subquery prefix as below) to issue the subquery to a collection
> {code}
> q=name_s:dave=true=*,depts:[subquery]=20&
> depts.q={!terms f=dept_id_s v=$row.dept_ss_dv}=text_t&
> depts.indent=true&
> depts.collection=departments&
> depts.rows=10=q,fl,rows,row.dept_ss_dv
> {code}
> Caveat: it can be quite slow, and it handles only the current search result 
> page, not the entire result set.






[GitHub] lucene-solr pull request: Limit threadpools by default to 128

2016-05-17 Thread bjoernhaeuser
Github user bjoernhaeuser closed the pull request at:

https://github.com/apache/lucene-solr/pull/11


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-8727) Limit Threadpools by default

2016-05-17 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287554#comment-15287554
 ] 

ASF GitHub Bot commented on SOLR-8727:
--

Github user bjoernhaeuser closed the pull request at:

https://github.com/apache/lucene-solr/pull/11


> Limit Threadpools by default
> 
>
> Key: SOLR-8727
> URL: https://issues.apache.org/jira/browse/SOLR-8727
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Affects Versions: 5.2.1
>Reporter: Björn Häuser
>Assignee: Noble Paul
>
> Yesterday we had a problem in our production cluster: it was running out of 
> native threads:
> {code}
> null:java.lang.RuntimeException: java.lang.OutOfMemoryError: unable to create 
> new native thread
>   at org.apache.solr.servlet.HttpSolrCall.sendError(HttpSolrCall.java:593)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:465)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.OutOfMemoryError: unable to create new native thread
>   at java.lang.Thread.start0(Native Method)
>   at java.lang.Thread.start(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor.addWorker(Unknown Source)
>   at java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
>   at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:135)
>   at java.util.concurrent.ExecutorCompletionService.submit(Unknown Source)
>   at 
> org.apache.solr.handler.component.HttpShardHandler.submit(HttpShardHandler.java:250)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:352)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   ... 22 more
> {code}
> After digging a little bit through the source code I found several 
> ThreadPools which have a default maxCoreSize of Integer.MAX_VALUE. I think we 
> should figure out a better default than this.
> Going to create the corresponding pull request on github for this.
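The general shape of the fix is a finite maximum pool size with a work queue, so that load spikes make tasks wait rather than forcing the JVM to spawn new native threads. A minimal standalone sketch, not Solr's actual `ExecutorUtil` code; the 128/4 numbers here are illustrative:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolSketch {
    public static void main(String[] args) throws Exception {
        // maxPoolSize is a finite 128 rather than Integer.MAX_VALUE. The pool
        // grows past the 4 core threads only when the queue fills, so the
        // thread count can never exceed 128 no matter how many tasks arrive.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 128, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(10_000));
        for (int i = 0; i < 1_000; i++) {
            pool.submit(() -> { });   // simulated request work
        }
        pool.shutdown();
        if (!pool.awaitTermination(30, TimeUnit.SECONDS)) throw new AssertionError();
        if (pool.getCompletedTaskCount() != 1_000) throw new AssertionError();
        System.out.println("completed " + pool.getCompletedTaskCount() + " tasks");
    }
}
```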






[jira] [Created] (LUCENE-7286) WeightedSpanTermExtractor.extract() does not recognize SynonymQuery

2016-05-17 Thread Piotr (JIRA)
Piotr created LUCENE-7286:
-

 Summary: WeightedSpanTermExtractor.extract() does not recognize 
SynonymQuery
 Key: LUCENE-7286
 URL: https://issues.apache.org/jira/browse/LUCENE-7286
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/highlighter
Affects Versions: 6.0
Reporter: Piotr


Short description:

In the WeightedSpanTermExtractor.extract(...) method there is a long list of 
supported query types. SynonymQuery is not among them, so it falls through to 
extractUnknownQuery(), which does nothing. It would be really nice to have 
SynonymQuery covered as well.

Long description:

I'm trying to highlight external text using a Highlighter. The query is 
created by QueryParser. If the resulting query is simple, it works like a 
charm. The problem arises when the parsed query contains a SynonymQuery, which 
happens when the stemmer returns multiple stems -- not uncommon for the Polish 
language.

Btw. this is my first jira issue.
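The fix would presumably be one more branch in extract() that registers every term of the SynonymQuery as a span term, so the highlighter can match any of the stems. A stub sketch of that idea; the classes below are simplified stand-ins, not Lucene's real SynonymQuery/WeightedSpanTerm API:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SynonymExtractSketch {
    // Simplified stand-in for Lucene's SynonymQuery: a disjunction of terms
    // that score as if they were a single term.
    static class SynonymQuery {
        final List<String> terms;
        SynonymQuery(List<String> terms) { this.terms = terms; }
    }

    // The missing extract() branch: register every synonym term as an
    // equal-weight span term instead of falling through to a no-op.
    static Map<String, Float> extract(SynonymQuery q, float boost) {
        Map<String, Float> weighted = new HashMap<>();
        for (String term : q.terms) weighted.put(term, boost);
        return weighted;
    }

    public static void main(String[] args) {
        SynonymQuery q = new SynonymQuery(Arrays.asList("kot", "kota", "kotem"));
        Map<String, Float> terms = extract(q, 1.0f);
        if (terms.size() != 3 || terms.get("kot") != 1.0f) throw new AssertionError();
        System.out.println(terms.size() + " span terms extracted");
    }
}
```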






[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287525#comment-15287525
 ] 

Hoss Man commented on SOLR-5944:


FWIW, I'm testing Ishan's latest patch against lucene-solr master, and the two 
"reproduce" lines from Steve's logs (minus the linedocs path) fail 100% of the 
time on my box, although the specific doc listed in the failure message varies 
from run to run, presumably because of the parallel threads? ...

{noformat}
ant test  -Dtestcase=TestStressInPlaceUpdates -Dtests.method=stressTest 
-Dtests.seed=FFC46C473EC471E6 -Dtests.slow=true -Dtests.locale=sr-ME 
-Dtests.timezone=Europe/Riga -Dtests.asserts=true -Dtests.file.encoding=UTF-8

ant test  -Dtestcase=TestStressInPlaceUpdates -Dtests.method=stressTest 
-Dtests.seed=15E180DC7142CBF3 -Dtests.slow=true -Dtests.locale=pt-BR 
-Dtests.timezone=Africa/Juba -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{noformat}



> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (LUCENE-7258) Tune DocIdSetBuilder allocation rate

2016-05-17 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287524#comment-15287524
 ] 

Adrien Grand commented on LUCENE-7258:
--

Thanks for the catches, I played with many different options and my comments 
went out of sync with the code. :)

> In ensureBufferCapacity, when buffers.isEmpty, I think the first buffer 
> should have a minimum size of 64 (or 32?), not 1. This will avoid a possible 
> slow start of small buffers when numDocs is 0 or 1. At least I saw this while 
> setting a breakpoint in some spatial tests, seeing the first two buffers both 
> of size one and the 3rd of size two, etc.

Fair enough.

> do you think it might be worth optimizing for the case that there is one 
> buffer that can simply be returned?

Good question. I believe this would only help on small segments, but this 
sounds easy so maybe we should do it.

I'll do the reorderings.
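The allocation cost of the shallow growth factor can be seen with a toy model of cumulative buffer reallocation; this is not Lucene's exact ArrayUtil.oversize logic (which also adds a small constant and rounds for alignment), just the growth-factor effect the issue describes:

```java
public class GrowthSketch {
    // Cumulative entries allocated while growing a buffer from 64 up to at
    // least `target`, reallocating a fresh array each step (the old arrays
    // become garbage, which is what shows up in the allocation profiles).
    static long totalAllocated(double growthFactor, long target) {
        long size = 64, total = size;
        while (size < target) {
            size = Math.max(size + 1, (long) (size * growthFactor));
            total += size;
        }
        return total;
    }

    public static void main(String[] args) {
        long target = 100_000_000L;                   // ~100M-doc index from the issue
        long eighth = totalAllocated(1.125, target);  // ArrayUtil-style ~1/8th growth
        long doubling = totalAllocated(2.0, target);  // a more aggressive factor
        // The shallow 1/8th curve reallocates several times more cumulative
        // memory before reaching the target than doubling does.
        if (eighth <= doubling) throw new AssertionError();
        System.out.println("1/8th growth: " + eighth + ", doubling: " + doubling);
    }
}
```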

> Tune DocIdSetBuilder allocation rate
> 
>
> Key: LUCENE-7258
> URL: https://issues.apache.org/jira/browse/LUCENE-7258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Jeff Wartes
> Attachments: 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> LUCENE-7258-expanding.patch, allocation_plot.jpg
>
>
> LUCENE-7211 converted IntersectsPrefixTreeQuery to use DocIdSetBuilder, but 
> didn't actually reduce garbage generation for my Solr index.
> Since something like 40% of my garbage (by space) is now attributed to 
> DocIdSetBuilder.growBuffer, I charted a few different allocation strategies 
> to see if I could tune things more. 
> See here: http://i.imgur.com/7sXLAYv.jpg 
> The jump-then-flatline at the right would be where DocIdSetBuilder gives up 
> and allocates a FixedBitSet for a 100M-doc index. (The 1M-doc index 
> curve/cutoff looked similar)
> Perhaps unsurprisingly, the 1/8th growth factor in ArrayUtil.oversize is 
> terrible from an allocation standpoint if you're doing a lot of expansions, 
> and is especially terrible when used to build a short-lived data structure 
> like this one.
> By the time it goes with the FBS, it's allocated around twice as much memory 
> for the buffer as it would have needed for just the FBS.






[jira] [Resolved] (SOLR-9117) Leaking the first SolrCore after reload

2016-05-17 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9117.
-
   Resolution: Fixed
 Assignee: Shalin Shekhar Mangar  (was: Erick Erickson)
Fix Version/s: master (7.0)
   6.1

Thanks Jessica!

> Leaking the first SolrCore after reload
> ---
>
> Key: SOLR-9117
> URL: https://issues.apache.org/jira/browse/SOLR-9117
> Project: Solr
>  Issue Type: Bug
>Reporter: Jessica Cheng Mallet
>Assignee: Shalin Shekhar Mangar
>  Labels: core, leak
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9117.patch
>
>
> When a SolrCore for a particular index is created for the first time, it's 
> added to the SolrCores#createdCores map. However, this map doesn't get 
> updated when this core is reloaded, leading to the first SolrCore being 
> leaked.
> Taking a look at how createdCores is used, it seems like it doesn't serve any 
> purpose (its only read is in SolrCores#getAllCoreNames, which includes 
> entries from SolrCores.cores anyway), so I'm proposing a patch to remove the 
> createdCores map completely. However, if someone else knows that createdCores 
> exist for a reason, I'll be happy to change the fix to updating the 
> createdCores map when reload is called.






[jira] [Commented] (SOLR-9117) Leaking the first SolrCore after reload

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287508#comment-15287508
 ] 

ASF subversion and git services commented on SOLR-9117:
---

Commit ba7698e4e70f5851e22fb47e2ca595ba983b134a in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ba7698e ]

SOLR-9117: The first SolrCore is leaked after reload
(cherry picked from commit d1202a8)


> Leaking the first SolrCore after reload
> ---
>
> Key: SOLR-9117
> URL: https://issues.apache.org/jira/browse/SOLR-9117
> Project: Solr
>  Issue Type: Bug
>Reporter: Jessica Cheng Mallet
>Assignee: Erick Erickson
>  Labels: core, leak
> Attachments: SOLR-9117.patch
>
>
> When a SolrCore for a particular index is created for the first time, it's 
> added to the SolrCores#createdCores map. However, this map doesn't get 
> updated when this core is reloaded, leading to the first SolrCore being 
> leaked.
> Taking a look at how createdCores is used, it seems like it doesn't serve any 
> purpose (its only read is in SolrCores#getAllCoreNames, which includes 
> entries from SolrCores.cores anyway), so I'm proposing a patch to remove the 
> createdCores map completely. However, if someone else knows that createdCores 
> exist for a reason, I'll be happy to change the fix to updating the 
> createdCores map when reload is called.






[jira] [Commented] (SOLR-9117) Leaking the first SolrCore after reload

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287505#comment-15287505
 ] 

ASF subversion and git services commented on SOLR-9117:
---

Commit d1202a8f8d223a6148e79628e63e7677dd4325a6 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d1202a8 ]

SOLR-9117: The first SolrCore is leaked after reload


> Leaking the first SolrCore after reload
> ---
>
> Key: SOLR-9117
> URL: https://issues.apache.org/jira/browse/SOLR-9117
> Project: Solr
>  Issue Type: Bug
>Reporter: Jessica Cheng Mallet
>Assignee: Erick Erickson
>  Labels: core, leak
> Attachments: SOLR-9117.patch
>
>
> When a SolrCore for a particular index is created for the first time, it's 
> added to the SolrCores#createdCores map. However, this map doesn't get 
> updated when this core is reloaded, leading to the first SolrCore being 
> leaked.
> Taking a look at how createdCores is used, it seems like it doesn't serve any 
> purpose (its only read is in SolrCores#getAllCoreNames, which includes 
> entries from SolrCores.cores anyway), so I'm proposing a patch to remove the 
> createdCores map completely. However, if someone else knows that createdCores 
> exist for a reason, I'll be happy to change the fix to updating the 
> createdCores map when reload is called.






[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-5944:
-
Attachment: TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt
TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt

I'm beasting 2000 iterations of {{TestStressInPlaceUpdates}} with Miller's 
beast script against https://github.com/chatman/lucene-solr/tree/solr_5944 at 
revision  eb044ac71 and have so far seen two failures, at iteration 167 and at 
iteration 587, the stdout from which I'm attaching here.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Comment Edited] (SOLR-797) Construct EmbeddedSolrServer response without serializing/parsing

2016-05-17 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286755#comment-15286755
 ] 

Mikhail Khludnev edited comment on SOLR-797 at 5/17/16 8:21 PM:


Colleagues, what about flipping Commons IO to 2.5 and using [a really 
expandable 
buffer|http://commons.apache.org/proper/commons-io/javadocs/api-release/src-html/org/apache/commons/io/output/ByteArrayOutputStream.html#line.336]


was (Author: mkhludnev):
Colleagues, what about flipping Commons IO to 2.5 and using [a really 
expandable 
buffer|http://commons.apache.org/proper/commons-io/javadocs/api-release/src-html/org/apache/commons/io/output/ByteArrayOutputStream.html#line.336]

> Construct EmbeddedSolrServer response without serializing/parsing
> -
>
> Key: SOLR-797
> URL: https://issues.apache.org/jira/browse/SOLR-797
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 1.3
>Reporter: Jonathan Lee
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-797.patch, SOLR-797.patch, SOLR-797.patch
>
>
> Currently, the EmbeddedSolrServer serializes the response and reparses in 
> order to create the final NamedList response.  From the comment in 
> EmbeddedSolrServer.java, the goal is to:
> * convert the response directly into a named list






[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3278 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3278/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test

Error Message:
There are still nodes recoverying - waited for 45 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 45 
seconds
at 
__randomizedtesting.SeedInfo.seed([D6D3999F69ADFC7A:5E87A645C7519182]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:182)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:858)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.test(DistribDocExpirationUpdateProcessorTest.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Comment Edited] (LUCENE-7277) Make Query.hashCode and Query.equals abstract

2016-05-17 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287419#comment-15287419
 ] 

Paul Elschot edited comment on LUCENE-7277 at 5/17/16 7:53 PM:
---

An intermediate class with attributes uses these attributes in its 
equals()/hashCode(), and a leaf class without attributes currently still has 
its class used by the default implementation that is called via super.

The same can be done with sameClassAs() from the patch for equals(), combined 
with a getClass() in an intermediate hashCode() implementation. I think I'll 
give that a try.


was (Author: paul.elsc...@xs4all.nl):
An intermediate class with attributes uses these attributes in their 
equals()/hashCode(), and a leaf class without attributes currently still has 
its class used by the default implementation that is called via super.

The same can be done with sameClassAs() from the patch for equals(), combined 
with a getClass() in an intermediate hashCode() implementation. I think I'll 
give that a try.

> Make Query.hashCode and Query.equals abstract
> -
>
> Key: LUCENE-7277
> URL: https://issues.apache.org/jira/browse/LUCENE-7277
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-7277.patch
>
>
> Custom subclasses of the Query class have the default implementation of 
> hashCode/equals that make all instances of the subclass equal. If somebody 
> doesn't know this it can be pretty tricky to debug with IndexSearcher's query 
> cache on. 
> Is there any rationale for declaring it this way instead of making those 
> methods abstract (and enforcing their proper implementation in a subclass)?
> {code}
>   public int hashCode() {
> return getClass().hashCode();
>   }
>   public boolean equals(Object obj) {
> if (obj == null)
>   return false;
> return getClass() == obj.getClass();
>   }
> {code}






[jira] [Commented] (LUCENE-7277) Make Query.hashCode and Query.equals abstract

2016-05-17 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287419#comment-15287419
 ] 

Paul Elschot commented on LUCENE-7277:
--

An intermediate class with attributes uses these attributes in its 
equals()/hashCode(), and a leaf class without attributes currently still has 
its class used by the default implementation that is called via super.

The same can be done with sameClassAs() from the patch for equals(), combined 
with a getClass() in an intermediate hashCode() implementation. I think I'll 
give that a try.
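The scheme being discussed can be sketched roughly as follows. The helper names sameClassAs() and classHash() follow the patch discussion above, but the exact signatures are assumptions, and TermishQuery is a purely hypothetical subclass for illustration:

```java
// Sketch of the pattern under discussion: Query makes equals()/hashCode()
// abstract, and provides final helpers so subclasses combine an exact-class
// check with their own field comparisons. Empty leaf classes still get a
// per-class hash via classHash().
abstract class Query {
    @Override public abstract boolean equals(Object obj);
    @Override public abstract int hashCode();

    // True only when obj is non-null and of exactly the same class.
    protected final boolean sameClassAs(Object obj) {
        return obj != null && getClass() == obj.getClass();
    }

    // Per-class seed for hashCode() implementations.
    protected final int classHash() {
        return getClass().hashCode();
    }
}

// Hypothetical leaf class showing the intended usage.
final class TermishQuery extends Query {
    private final String term;
    TermishQuery(String term) { this.term = term; }

    @Override public boolean equals(Object obj) {
        return sameClassAs(obj) && term.equals(((TermishQuery) obj).term);
    }

    @Override public int hashCode() {
        return 31 * classHash() + term.hashCode();
    }
}
```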

> Make Query.hashCode and Query.equals abstract
> -
>
> Key: LUCENE-7277
> URL: https://issues.apache.org/jira/browse/LUCENE-7277
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-7277.patch
>
>
> Custom subclasses of the Query class have the default implementation of 
> hashCode/equals that make all instances of the subclass equal. If somebody 
> doesn't know this it can be pretty tricky to debug with IndexSearcher's query 
> cache on. 
> Is there any rationale for declaring it this way instead of making those 
> methods abstract (and enforcing their proper implementation in a subclass)?
> {code}
>   public int hashCode() {
> return getClass().hashCode();
>   }
>   public boolean equals(Object obj) {
> if (obj == null)
>   return false;
> return getClass() == obj.getClass();
>   }
> {code}






[jira] [Commented] (SOLR-9125) CollapseQParserPlugin allocations are index based, not query based

2016-05-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287401#comment-15287401
 ] 

Joel Bernstein commented on SOLR-9125:
--

What I was thinking was to first run the query and get the cardinality. But 
this is really not fun, as the CollapsingQParserPlugin would have to know the 
main query and all the filter queries. It doesn't sound like it would be fun 
to write or maintain.

> CollapseQParserPlugin allocations are index based, not query based
> --
>
> Key: SOLR-9125
> URL: https://issues.apache.org/jira/browse/SOLR-9125
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Jeff Wartes
>Priority: Minor
>  Labels: collapsingQParserPlugin
>
> Among other things, CollapsingQParserPlugin’s OrdScoreCollector allocates 
> space per-query for: 
> 1 int (doc id) per ordinal
> 1 float (score) per ordinal
> 1 bit (FixedBitSet) per document in the index
>  
> So the higher the cardinality of the thing you’re grouping on, and the more 
> documents in the index, the more memory gets consumed per query. Since high 
> cardinality and large indexes are the use-cases CollapseQParserPlugin was 
> designed for, I thought I'd point this out.
> My real issue is that this does not vary based on the number of results in 
> the query, either before or after collapsing, so a query that results in one 
> doc consumes the same amount of memory as one that returns all of them. All 
> of the Collectors suffer from this to some degree, but I think OrdScore is 
> the worst offender.
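A back-of-envelope calculation of the allocations listed above makes the scaling concrete (the ordinal and index counts here are illustrative assumptions, not measurements):

```java
// Rough per-query footprint for OrdScoreCollector, using the allocations
// described above: 4 bytes (doc id) + 4 bytes (score) per ordinal, plus
// 1 bit per document in the index for the FixedBitSet.
class CollapseMemEstimate {
    static long bytesPerQuery(long ordinals, long maxDoc) {
        long ordinalBytes = ordinals * (4 + 4); // int doc id + float score
        long bitsetBytes  = (maxDoc + 7) / 8;   // 1 bit per doc, rounded up
        return ordinalBytes + bitsetBytes;
    }

    public static void main(String[] args) {
        // e.g. 10M distinct collapse values in a 100M-doc index
        long b = bytesPerQuery(10_000_000L, 100_000_000L);
        System.out.println(b / (1024 * 1024) + " MiB"); // prints "88 MiB"
    }
}
```

Note that this cost is paid whether the query matches one document or all of them, which is the core of the complaint.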






[jira] [Commented] (LUCENE-7277) Make Query.hashCode and Query.equals abstract

2016-05-17 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287396#comment-15287396
 ] 

Dawid Weiss commented on LUCENE-7277:
-

What I meant was: if they're different Query classes then the equivalence 
should be object-specific, not class-specific, right? Otherwise what's the 
point of having those different classes -- are all of their objects really 
equal?

> Make Query.hashCode and Query.equals abstract
> -
>
> Key: LUCENE-7277
> URL: https://issues.apache.org/jira/browse/LUCENE-7277
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-7277.patch
>
>
> Custom subclasses of the Query class have the default implementation of 
> hashCode/equals that make all instances of the subclass equal. If somebody 
> doesn't know this it can be pretty tricky to debug with IndexSearcher's query 
> cache on. 
> Is there any rationale for declaring it this way instead of making those 
> methods abstract (and enforcing their proper implementation in a subclass)?
> {code}
>   public int hashCode() {
> return getClass().hashCode();
>   }
>   public boolean equals(Object obj) {
> if (obj == null)
>   return false;
> return getClass() == obj.getClass();
>   }
> {code}






[jira] [Commented] (LUCENE-7277) Make Query.hashCode and Query.equals abstract

2016-05-17 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287373#comment-15287373
 ] 

Paul Elschot commented on LUCENE-7277:
--

For leaf classes that don't override them, this used to work because of the 
getClass() in Query.equals() and Query.hashCode().
Would it be OK to use that in the intermediate span classes?

> Make Query.hashCode and Query.equals abstract
> -
>
> Key: LUCENE-7277
> URL: https://issues.apache.org/jira/browse/LUCENE-7277
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-7277.patch
>
>
> Custom subclasses of the Query class have the default implementation of 
> hashCode/equals that make all instances of the subclass equal. If somebody 
> doesn't know this it can be pretty tricky to debug with IndexSearcher's query 
> cache on. 
> Is there any rationale for declaring it this way instead of making those 
> methods abstract (and enforcing their proper implementation in a subclass)?
> {code}
>   public int hashCode() {
> return getClass().hashCode();
>   }
>   public boolean equals(Object obj) {
> if (obj == null)
>   return false;
> return getClass() == obj.getClass();
>   }
> {code}






[jira] [Updated] (SOLR-9121) ant precommit fails on ant check-lib-versions

2016-05-17 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-9121:
-
Fix Version/s: 6.1

> ant precommit fails on ant check-lib-versions
> -
>
> Key: SOLR-9121
> URL: https://issues.apache.org/jira/browse/SOLR-9121
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Steve Rowe
> Fix For: 6.1
>
> Attachments: SOLR-9121-passthrough.patch, SOLR-9121.patch
>
>
> e.g.  http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16766/






[jira] [Resolved] (SOLR-9121) ant precommit fails on ant check-lib-versions

2016-05-17 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-9121.
--
Resolution: Fixed
  Assignee: Steve Rowe

Resolving; Jenkins succeeded on master: 
[http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16769/].

> ant precommit fails on ant check-lib-versions
> -
>
> Key: SOLR-9121
> URL: https://issues.apache.org/jira/browse/SOLR-9121
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Steve Rowe
> Attachments: SOLR-9121-passthrough.patch, SOLR-9121.patch
>
>
> e.g.  http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16766/






[jira] [Commented] (LUCENE-7277) Make Query.hashCode and Query.equals abstract

2016-05-17 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287368#comment-15287368
 ] 

Dawid Weiss commented on LUCENE-7277:
-

Yes, absolutely, if you have time, do it, go ahead! I noticed that some 
endpoints of the inheritance hierarchy (leaf classes) actually do *not* 
override equals and rely on their superclasses -- this seems incorrect to me, 
but perhaps there's a valid reason why this is the case.

> Make Query.hashCode and Query.equals abstract
> -
>
> Key: LUCENE-7277
> URL: https://issues.apache.org/jira/browse/LUCENE-7277
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-7277.patch
>
>
> Custom subclasses of the Query class have the default implementation of 
> hashCode/equals that make all instances of the subclass equal. If somebody 
> doesn't know this it can be pretty tricky to debug with IndexSearcher's query 
> cache on. 
> Is there any rationale for declaring it this way instead of making those 
> methods abstract (and enforcing their proper implementation in a subclass)?
> {code}
>   public int hashCode() {
> return getClass().hashCode();
>   }
>   public boolean equals(Object obj) {
> if (obj == null)
>   return false;
> return getClass() == obj.getClass();
>   }
> {code}






[jira] [Commented] (LUCENE-7277) Make Query.hashCode and Query.equals abstract

2016-05-17 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287360#comment-15287360
 ] 

Paul Elschot commented on LUCENE-7277:
--

LGTM, shall I try this on the span queries?

> Make Query.hashCode and Query.equals abstract
> -
>
> Key: LUCENE-7277
> URL: https://issues.apache.org/jira/browse/LUCENE-7277
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Attachments: LUCENE-7277.patch
>
>
> Custom subclasses of the Query class have the default implementation of 
> hashCode/equals that make all instances of the subclass equal. If somebody 
> doesn't know this it can be pretty tricky to debug with IndexSearcher's query 
> cache on. 
> Is there any rationale for declaring it this way instead of making those 
> methods abstract (and enforcing their proper implementation in a subclass)?
> {code}
>   public int hashCode() {
> return getClass().hashCode();
>   }
>   public boolean equals(Object obj) {
> if (obj == null)
>   return false;
> return getClass() == obj.getClass();
>   }
> {code}






[JENKINS-EA] Lucene-Solr-6.x-Linux (64bit/jdk-9-ea+118) - Build # 678 - Failure!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/678/
Java: 64bit/jdk-9-ea+118 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([D01CE9A4C8B681AD:A622F6D789812C82]:0)
at sun.nio.ch.Net.bind0(java.base@9-ea/Native Method)
at sun.nio.ch.Net.bind(java.base@9-ea/Net.java:433)
at sun.nio.ch.Net.bind(java.base@9-ea/Net.java:425)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(java.base@9-ea/ServerSocketChannelImpl.java:225)
at 
sun.nio.ch.ServerSocketAdaptor.bind(java.base@9-ea/ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:326)
at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:244)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.server.Server.doStart(Server.java:384)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:327)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.startJettySolrRunner(MiniSolrCloudCluster.java:368)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testStopAllStartAll(TestMiniSolrCloudCluster.java:443)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9125) CollapseQParserPlugin allocations are index based, not query based

2016-05-17 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287339#comment-15287339
 ] 

Jeff Wartes commented on SOLR-9125:
---

Isn't there a chicken-and-egg situation there? You need the set of matching 
docs to compute the HLL cardinality, which in turn specifies the initial size 
of the map you're going to save the set of matching docs in.

Or maybe collect() would just throw every doc in the FBS, and finish() would do 
all the finding of group heads and the collapsing?

> CollapseQParserPlugin allocations are index based, not query based
> --
>
> Key: SOLR-9125
> URL: https://issues.apache.org/jira/browse/SOLR-9125
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Jeff Wartes
>Priority: Minor
>  Labels: collapsingQParserPlugin
>
> Among other things, CollapsingQParserPlugin’s OrdScoreCollector allocates 
> space per-query for: 
> 1 int (doc id) per ordinal
> 1 float (score) per ordinal
> 1 bit (FixedBitSet) per document in the index
>  
> So the higher the cardinality of the thing you’re grouping on, and the more 
> documents in the index, the more memory gets consumed per query. Since high 
> cardinality and large indexes are the use-cases CollapseQParserPlugin was 
> designed for, I thought I'd point this out.
> My real issue is that this does not vary based on the number of results in 
> the query, either before or after collapsing, so a query that results in one 
> doc consumes the same amount of memory as one that returns all of them. All 
> of the Collectors suffer from this to some degree, but I think OrdScore is 
> the worst offender.






[jira] [Commented] (SOLR-9125) CollapseQParserPlugin allocations are index based, not query based

2016-05-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287265#comment-15287265
 ] 

Joel Bernstein commented on SOLR-9125:
--

One approach that might work for switching to primitive maps would be to first 
estimate the cardinality of the collapse values in the result set using 
hyperloglog, and then size the primitive map accordingly. But my guess is that 
this approach is going to really hurt performance quite a bit.
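The pre-sizing idea could be sketched as below. A simple linear-counting estimator stands in for HyperLogLog so the sketch stays self-contained; all names here are hypothetical, not actual Solr APIs:

```java
import java.util.BitSet;

// Sketch of the pre-sizing idea: stream the collapse values through a
// cheap cardinality estimator, then size the primitive map from that
// estimate instead of from the index-wide ordinal count. Linear counting
// (bitmap + bias-corrected log estimate) stands in for HyperLogLog.
class PreSize {
    static final int M = 1 << 16; // estimator bitmap size (64K bits)

    static int estimateCardinality(long[] collapseValues) {
        BitSet bits = new BitSet(M);
        for (long v : collapseValues) {
            // crude mixing hash; a real implementation would use a
            // proper 64-bit hash such as MurmurHash3
            bits.set(Long.hashCode(v * 0x9E3779B97F4A7C15L) & (M - 1));
        }
        int empty = M - bits.cardinality();
        return (int) Math.round(-M * Math.log((double) empty / M));
    }

    // Size the primitive map with some headroom for estimation error.
    static int initialMapCapacity(long[] collapseValues) {
        return (int) (estimateCardinality(collapseValues) * 1.25) + 16;
    }
}
```

The performance worry above is visible even in this sketch: it requires a full extra pass over the collapse values before collection can begin.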



> CollapseQParserPlugin allocations are index based, not query based
> --
>
> Key: SOLR-9125
> URL: https://issues.apache.org/jira/browse/SOLR-9125
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Jeff Wartes
>Priority: Minor
>  Labels: collapsingQParserPlugin
>
> Among other things, CollapsingQParserPlugin’s OrdScoreCollector allocates 
> space per-query for: 
> 1 int (doc id) per ordinal
> 1 float (score) per ordinal
> 1 bit (FixedBitSet) per document in the index
>  
> So the higher the cardinality of the thing you’re grouping on, and the more 
> documents in the index, the more memory gets consumed per query. Since high 
> cardinality and large indexes are the use-cases CollapseQParserPlugin was 
> designed for, I thought I'd point this out.
> My real issue is that this does not vary based on the number of results in 
> the query, either before or after collapsing, so a query that results in one 
> doc consumes the same amount of memory as one that returns all of them. All 
> of the Collectors suffer from this to some degree, but I think OrdScore is 
> the worst offender.






[jira] [Comment Edited] (SOLR-9125) CollapseQParserPlugin allocations are index based, not query based

2016-05-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287265#comment-15287265
 ] 

Joel Bernstein edited comment on SOLR-9125 at 5/17/16 6:49 PM:
---

One approach that might work for switching to primitive maps, would be first to 
estimate the cardinality of the collapse values in the result set using 
hyperloglog, and then sizing the primitive map accordingly. But my guess is 
this approach is going to really hurt performance. 




was (Author: joel.bernstein):
One approach that might work for switching to primitive maps, would be first to 
estimate the cardinality of the collapse values in the result set using 
hyperloglog, and then sizing the primitive map accordingly. But my guess is 
this approach is going really hurt performance quite a bit. 



> CollapseQParserPlugin allocations are index based, not query based
> --
>
> Key: SOLR-9125
> URL: https://issues.apache.org/jira/browse/SOLR-9125
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Jeff Wartes
>Priority: Minor
>  Labels: collapsingQParserPlugin
>
> Among other things, CollapsingQParserPlugin’s OrdScoreCollector allocates 
> space per-query for: 
> 1 int (doc id) per ordinal
> 1 float (score) per ordinal
> 1 bit (FixedBitSet) per document in the index
>  
> So the higher the cardinality of the thing you’re grouping on, and the more 
> documents in the index, the more memory gets consumed per query. Since high 
> cardinality and large indexes are the use-cases CollapseQParserPlugin was 
> designed for, I thought I'd point this out.
> My real issue is that this does not vary based on the number of results in 
> the query, either before or after collapsing, so a query that results in one 
> doc consumes the same amount of memory as one that returns all of them. All 
> of the Collectors suffer from this to some degree, but I think OrdScore is 
> the worst offender.






[jira] [Updated] (SOLR-9118) HashQParserPlugin should trim partition keys

2016-05-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9118:
-
Fix Version/s: master (7.0)
   6.1

> HashQParserPlugin should trim partition keys
> 
>
> Key: SOLR-9118
> URL: https://issues.apache.org/jira/browse/SOLR-9118
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-9118.patch
>
>
> Currently the HashQParserPlugin doesn't trim the partition keys, so having 
> spaces in the comma delimited list of keys causes an NPE while loading the 
> field from the schema. 
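The fix described amounts to trimming each key after splitting the comma-delimited parameter; a minimal sketch of that idea (illustrative, not the actual patch):

```java
// Sketch of the fix described above: split the comma-delimited
// partitionKeys parameter and trim each key before the schema lookup,
// so "id, shard_s" behaves the same as "id,shard_s" instead of causing
// an NPE when " shard_s" is not found in the schema.
class PartitionKeys {
    static String[] parse(String partitionKeys) {
        String[] keys = partitionKeys.split(",");
        for (int i = 0; i < keys.length; i++) {
            keys[i] = keys[i].trim();
        }
        return keys;
    }
}
```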






[jira] [Resolved] (SOLR-9118) HashQParserPlugin should trim partition keys

2016-05-17 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9118.
--
Resolution: Implemented

> HashQParserPlugin should trim partition keys
> 
>
> Key: SOLR-9118
> URL: https://issues.apache.org/jira/browse/SOLR-9118
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9118.patch
>
>
> Currently the HashQParserPlugin doesn't trim the partition keys, so having 
> spaces in the comma delimited list of keys causes an NPE while loading the 
> field from the schema. 






[jira] [Commented] (SOLR-9118) HashQParserPlugin should trim partition keys

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287217#comment-15287217
 ] 

ASF subversion and git services commented on SOLR-9118:
---

Commit 93201cd07e95620c73cf4a88106df6b7343cdf44 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=93201cd ]

SOLR-9118: HashQParserPlugin should trim partition keys


> HashQParserPlugin should trim partition keys
> 
>
> Key: SOLR-9118
> URL: https://issues.apache.org/jira/browse/SOLR-9118
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9118.patch
>
>
> Currently the HashQParserPlugin doesn't trim the partition keys, so having 
> spaces in the comma delimited list of keys causes an NPE while loading the 
> field from the schema. 






[jira] [Commented] (SOLR-9118) HashQParserPlugin should trim partition keys

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287218#comment-15287218
 ] 

ASF subversion and git services commented on SOLR-9118:
---

Commit b1b90152905774ba67a5d1675f939454b1e96b5f in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b1b9015 ]

SOLR-9118: Update CHANGES.txt

Conflicts:
solr/CHANGES.txt


> HashQParserPlugin should trim partition keys
> 
>
> Key: SOLR-9118
> URL: https://issues.apache.org/jira/browse/SOLR-9118
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9118.patch
>
>
> Currently the HashQParserPlugin doesn't trim the partition keys, so having 
> spaces in the comma delimited list of keys causes an NPE while loading the 
> field from the schema. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9125) CollapseQParserPlugin allocations are index based, not query based

2016-05-17 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287208#comment-15287208
 ] 

Joel Bernstein commented on SOLR-9125:
--

Yeah, the CollapsingQParserPlugin can use a lot of memory. The original design 
goal was to increase performance for collapsing on high-cardinality fields and 
large result sets, as opposed to large indexes. It was really designed to 
support fast collapse queries on large e-commerce catalogs, which are still 
typically small compared to other data sets.

If we can find a way to maintain the performance and shrink the memory usage 
this would be a great thing. 



> CollapseQParserPlugin allocations are index based, not query based
> --
>
> Key: SOLR-9125
> URL: https://issues.apache.org/jira/browse/SOLR-9125
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Jeff Wartes
>Priority: Minor
>  Labels: collapsingQParserPlugin
>
> Among other things, CollapsingQParserPlugin’s OrdScoreCollector allocates 
> space per-query for: 
> 1 int (doc id) per ordinal
> 1 float (score) per ordinal
> 1 bit (FixedBitSet) per document in the index
>  
> So the higher the cardinality of the thing you’re grouping on, and the more 
> documents in the index, the more memory gets consumed per query. Since high 
> cardinality and large indexes are the use-cases CollapseQParserPlugin was 
> designed for, I thought I'd point this out.
> My real issue is that this does not vary based on the number of results in 
> the query, either before or after collapsing, so a query that results in one 
> doc consumes the same amount of memory as one that returns all of them. All 
> of the Collectors suffer from this to some degree, but I think OrdScore is 
> the worst offender.
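The per-query allocations listed above can be turned into a rough footprint estimate; a sketch assuming 4-byte ints and floats, one bit per document for the FixedBitSet, and ignoring JVM object overhead:

```java
public class CollectorMemoryEstimate {
    // Rough per-query footprint of the allocations described above:
    // one int (doc id) and one float (score) per ordinal,
    // plus one bit per document in the index.
    static long estimateBytes(long numOrdinals, long maxDoc) {
        long docIds = numOrdinals * 4;   // int per ordinal
        long scores = numOrdinals * 4;   // float per ordinal
        long bitSet = (maxDoc + 7) / 8;  // FixedBitSet: 1 bit per doc
        return docIds + scores + bitSet;
    }

    public static void main(String[] args) {
        // e.g. 10M distinct collapse values in a 100M-doc index: ~88.2 MB
        long bytes = estimateBytes(10_000_000L, 100_000_000L);
        System.out.printf("~%.1f MB per query%n", bytes / (1024.0 * 1024.0));
    }
}
```

Note the estimate is the same whether the query matches one document or all of them, which is the point raised above.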



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7925) Implement indexing from gzip format file

2016-05-17 Thread Wendy Tao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wendy Tao updated SOLR-7925:

Comment: was deleted

(was: Hi Song,

I am interested in applying SOLR-7925.patch to solr 5.3 for indexing .xml.gz 
file. Could you let me know which solr project or solr package or solr .jar 
file I should apply the patch to ?  Thanks! --Wendy
)

> Implement indexing from gzip format file
> 
>
> Key: SOLR-7925
> URL: https://issues.apache.org/jira/browse/SOLR-7925
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Song Hyonwoo
>Priority: Minor
>  Labels: patch
> Attachments: SOLR-7925.patch
>
>
> This will support updates from gzipped files of JSON, XML and CSV.
> The request path will use "update/compress/gzip" instead of "update", with an 
> "update.contentType" parameter and "Content-Type: application/gzip" as a 
> header field.
> The following is a sample request using the curl command (use --data-binary, 
> not --data):
> curl 
> "http://localhost:8080/solr/collection1/update/compress/gzip?update.contentType=application/json=true;
>  -H 'Content-Type: application/gzip' --data-binary @data.json.gz
> To activate this function, add the following request handler configuration to 
> solrconfig.xml:
>class="org.apache.solr.handler.CompressedUpdateRequestHandler">
> 
>   application/gzip
> 
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-6.x #62: POMs out of sync

2016-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-6.x/62/

No tests ran.

Build Log:
[...truncated 28055 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.x/build.xml:756: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.x/build.xml:299: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.x/lucene/build.xml:416: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.x/lucene/common-build.xml:1682:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-6.x/lucene/common-build.xml:595:
 Error deploying artifact 'org.apache.lucene:lucene-test-framework:jar': Error 
deploying artifact: Failed to transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-test-framework/6.1.0-SNAPSHOT/lucene-test-framework-6.1.0-20160517.181622-58-sources.jar.md5.
 Return code is: 502

Total time: 9 minutes 38 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 65 - Still Failing

2016-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/65/

1 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriterReader.testDuringAddDelete

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:724)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:738)
at 
org.apache.lucene.index.IndexWriter.nrtIsCurrent(IndexWriter.java:4616)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenFromWriter(StandardDirectoryReader.java:287)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:266)
at 
org.apache.lucene.index.StandardDirectoryReader.doOpenIfChanged(StandardDirectoryReader.java:256)
at 
org.apache.lucene.index.DirectoryReader.openIfChanged(DirectoryReader.java:140)
at 
org.apache.lucene.index.TestIndexWriterReader.testDuringAddDelete(TestIndexWriterReader.java:870)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.nio.file.FileSystemException: 

[jira] [Commented] (SOLR-9118) HashQParserPlugin should trim partition keys

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287136#comment-15287136
 ] 

ASF subversion and git services commented on SOLR-9118:
---

Commit f8d1012717620b4ed019fcf1e19e4f335fcbbe93 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f8d1012 ]

SOLR-9118: HashQParserPlugin should trim partition keys


> HashQParserPlugin should trim partition keys
> 
>
> Key: SOLR-9118
> URL: https://issues.apache.org/jira/browse/SOLR-9118
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9118.patch
>
>
> Currently the HashQParserPlugin doesn't trim the partition keys, so having 
> spaces in the comma delimited list of keys causes an NPE while loading the 
> field from the schema. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9118) HashQParserPlugin should trim partition keys

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287137#comment-15287137
 ] 

ASF subversion and git services commented on SOLR-9118:
---

Commit c3836a2a8339ecfed1988061ae1805fdf3bfa62b in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c3836a2 ]

SOLR-9118: Update CHANGES.txt


> HashQParserPlugin should trim partition keys
> 
>
> Key: SOLR-9118
> URL: https://issues.apache.org/jira/browse/SOLR-9118
> Project: Solr
>  Issue Type: Bug
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Attachments: SOLR-9118.patch
>
>
> Currently the HashQParserPlugin doesn't trim the partition keys, so having 
> spaces in the comma delimited list of keys causes an NPE while loading the 
> field from the schema. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9124) Grouped Results does not support ExactStatsCache

2016-05-17 Thread Antony Scerri (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286826#comment-15286826
 ] 

Antony Scerri edited comment on SOLR-9124 at 5/17/16 5:34 PM:
--

The attached patch builds upon SOLR-9122 and SOLR-9123 to support the use of 
ExactStatsCache with grouped results. The test cases have been extended to 
demonstrate the problem with grouped results; the patch then fixes the problem 
by enabling those steps in the query evaluation phases.

Because the code branches used for grouped and ungrouped queries differ for 
common elements such as sorting, this still leads to an unfortunate scenario 
where documents with the same score are not ordered across and within groups in 
the same manner (e.g. shard index and then position in results as secondary and 
tertiary sorting criteria). This means some of the test cases re-sort items to 
look for the correct scores and cannot rely upon the native sorting.

The patch is against the 5.x branch and builds upon the patches for SOLR-9122 
and SOLR-9123, but it should be straightforward to apply to master. 



was (Author: antonyscerri):
The attached patch extends upon SOLR-1922 and SOLR-1923 to support use of 
ExactStatsCache with grouped results. The test cases have been extended to 
demonstrate the problem with grouped results and then fix the problem by 
enabling it in the query evaluation phases.

Because the code branches used for grouped and ungrouped queries differs for 
the common elements such as sorting this still lead to an unfortunate scenario 
where documents with the same score are not ordered across and within groups in 
the same manner (eg shard index and then position in results as secondary and 
tertiary sorting criteria). This means some of the test cases are resorting 
items to look for the correct scores but cannot rely upon the native sorting.

The patch is against the 5.x branch and built upon the patch with SOLR-9122 and 
SOLR-1923 but should be straightforward to apply to the master. 


> Grouped Results does not support ExactStatsCache
> 
>
> Key: SOLR-9124
> URL: https://issues.apache.org/jira/browse/SOLR-9124
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5, 6.0
>Reporter: Antony Scerri
>  Labels: ExactStatsCache, Grouping
> Fix For: 5.5.1, 6.1
>
> Attachments: SOLR-9124.patch
>
>
> When using grouped results, trying to use ExactStatsCache has no effect. The 
> grouping code branches off and doesn't incorporate those steps in the 
> evaluation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287052#comment-15287052
 ] 

Ishan Chattopadhyaya commented on SOLR-5944:


bq. Yeah, I've stopped using the Reply thing because of this
I see... I'll consider stopping the use of the reply feature now :-) Sorry for 
the confusion.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8774) Store the latest available version of a blob so that some components can access it w/o version

2016-05-17 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287036#comment-15287036
 ] 

Noble Paul commented on SOLR-8774:
--

The configuration of plugins does not need the version number; the 
{{}} tag requires the version number.

But yes, that is the plan. You should be able to just specify the blob name 
without the version number and it would load the latest blob.  

> Store the latest available version of a blob so that some components can 
> access it w/o version
> --
>
> Key: SOLR-8774
> URL: https://issues.apache.org/jira/browse/SOLR-8774
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> All blob store access requires the blob name and version. If we store the 
> version and update that doc on every change, it will be possible to fetch the 
> latest version using realtime get



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-6.0 - Build # 11 - Failure

2016-05-17 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.0/11/

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git -c core.askpass=true 
fetch --tags --progress git://git.apache.org/lucene-solr.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: read error: Connection reset by peer

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1693)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1441)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to lucene(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor615.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy132.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)

[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-5944:
---
Attachment: SOLR-5944.patch

Combined Justin's fixes, Hoss' fixes and Noble's fixes (which were already 
there), updated to master and committed to the solr_5944 branch 
(https://github.com/chatman/lucene-solr/tree/solr_5944). Attached the patch for 
the same here.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9126) shutdownAndAwaitTermination interruption handling faulty?

2016-05-17 Thread David Smiley (JIRA)
David Smiley created SOLR-9126:
--

 Summary: shutdownAndAwaitTermination interruption handling faulty?
 Key: SOLR-9126
 URL: https://issues.apache.org/jira/browse/SOLR-9126
 Project: Solr
  Issue Type: Bug
Reporter: David Smiley


I'm looking at ExecutorUtil.shutdownAndAwaitTermination:
{code:java}
public static void shutdownAndAwaitTermination(ExecutorService pool) {
  pool.shutdown(); // Disable new tasks from being submitted
  boolean shutdown = false;
  while (!shutdown) {
    try {
      // Wait a while for existing tasks to terminate
      shutdown = pool.awaitTermination(60, TimeUnit.SECONDS);
    } catch (InterruptedException ie) {
      // Preserve interrupt status
      Thread.currentThread().interrupt();
    }
  }
}
{code}
If the current thread calling this method is interrupted, this loop will spin 
forever, since awaitTermination will keep throwing InterruptedException once 
the interrupt status has been restored. If InterruptedException isn't going to 
be propagated, this method should return, and probably log that it was 
interrupted prior to termination.
Disclaimer: this is purely from inspection; I haven't seen this happen.
Nitpick: the shutdown boolean is needless; simply do if (pool.await...) return;
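One way to address both points, sketched outside Solr (this is an illustrative variant under the reasoning above, not the committed fix): return after restoring the interrupt status instead of retrying, and drop the boolean.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownUtil {
    // Interrupt-safe variant: once awaitTermination throws, restore the
    // interrupt flag and return rather than looping, since the call would
    // keep throwing InterruptedException while the flag stays set.
    public static void shutdownAndAwaitTermination(ExecutorService pool) {
        pool.shutdown(); // disable new task submission
        try {
            while (!pool.awaitTermination(60, TimeUnit.SECONDS)) {
                // pool still busy; keep waiting (a log line could go here)
            }
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            // interrupted before termination; give up instead of spinning
        }
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.submit(() -> {});
        shutdownAndAwaitTermination(pool);
        System.out.println(pool.isTerminated());
    }
}
```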



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7925) Implement indexing from gzip format file

2016-05-17 Thread Wendy Tao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15287001#comment-15287001
 ] 

Wendy Tao commented on SOLR-7925:
-

Hi Song,

I am interested in applying SOLR-7925.patch to Solr 5.3 for indexing .xml.gz 
files. Could you let me know which Solr project, package, or .jar file I should 
apply the patch to? Thanks! --Wendy


> Implement indexing from gzip format file
> 
>
> Key: SOLR-7925
> URL: https://issues.apache.org/jira/browse/SOLR-7925
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 5.2.1
>Reporter: Song Hyonwoo
>Priority: Minor
>  Labels: patch
> Attachments: SOLR-7925.patch
>
>
> This will support updates from gzipped files of JSON, XML and CSV.
> The request path will use "update/compress/gzip" instead of "update", with an 
> "update.contentType" parameter and "Content-Type: application/gzip" as a 
> header field.
> The following is a sample request using the curl command (use --data-binary, 
> not --data):
> curl 
> "http://localhost:8080/solr/collection1/update/compress/gzip?update.contentType=application/json=true;
>  -H 'Content-Type: application/gzip' --data-binary @data.json.gz
> To activate this function, add the following request handler configuration to 
> solrconfig.xml:
>class="org.apache.solr.handler.CompressedUpdateRequestHandler">
> 
>   application/gzip
> 
>   
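To produce a payload like the data.json.gz in the curl example above, the JDK's GZIPOutputStream suffices. A small illustration (file names follow the example; the JSON content here is a made-up sample):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.FileWriter;
import java.util.zip.GZIPOutputStream;

public class GzipPayload {
    // Compress an update file into the gzip payload that would be sent with
    // curl --data-binary and the Content-Type: application/gzip header.
    static void gzip(String src, String dst) throws Exception {
        try (FileInputStream in = new FileInputStream(src);
             GZIPOutputStream out =
                 new GZIPOutputStream(new FileOutputStream(dst))) {
            in.transferTo(out);
        }
    }

    public static void main(String[] args) throws Exception {
        // Write a tiny sample update file, then compress it.
        try (FileWriter w = new FileWriter("data.json")) {
            w.write("[{\"id\":\"1\"}]");
        }
        gzip("data.json", "data.json.gz");
    }
}
```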



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286995#comment-15286995
 ] 

Steve Rowe commented on SOLR-5944:
--

bq. but using the "Reply" feature so it appears inline.

Yeah, I've stopped using the Reply thing because of this - you can't find all 
the new posts at the bottom if people use this misfeature (as Muir called it).

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286987#comment-15286987
 ] 

Hoss Man commented on SOLR-5944:


oh ... wait ... i see now, it was posted *after* my other comments/attachments 
... but using the "Reply" feature so it appears inline.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286986#comment-15286986
 ] 

Hoss Man commented on SOLR-5944:


bq. https://github.com/chatman/lucene-solr/tree/solr_5944 Noble's last commit 
is: 4572983839e3943b7dea52a8a2d55aa2b3b5ca3a

Ugh ... somehow I completely missed seeing this comment yesterday; I don't 
know why I couldn't find that branch on GitHub.

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[jira] [Commented] (SOLR-8988) Improve facet.method=fcs performance in SolrCloud

2016-05-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286979#comment-15286979
 ] 

Hoss Man commented on SOLR-8988:


I haven't had time to review it enough to be confident that I'd want to 
commit it myself -- but if you have then go for it, I'm +0.

My one bit of feedback from a quick skim of the patch is that I don't 
understand the javadocs for "FACET_DISTRIB_MCO" at all ... it's a boolean 
param, but the docs describe it as "The default mincount to request on 
distributed facet queries", which makes it sound like a number, and the "Default 
values" bit of the javadocs doesn't really do anything to clarify that confusion, 
since it also (appears to) talk about the (eventual) distributed mincount, and 
not the default value of the "FACET_DISTRIB_MCO" param itself

> Improve facet.method=fcs performance in SolrCloud
> -
>
> Key: SOLR-8988
> URL: https://issues.apache.org/jira/browse/SOLR-8988
> Project: Solr
>  Issue Type: Improvement
>Reporter: Keith Laban
> Attachments: SOLR-8988.patch, SOLR-8988.patch, Screen Shot 2016-04-25 
> at 2.54.47 PM.png, Screen Shot 2016-04-25 at 2.55.00 PM.png
>
>
> This relates to SOLR-8559 -- which improves the algorithm used by fcs 
> faceting when {{facet.mincount=1}}
> This patch allows {{facet.mincount}} to be sent as 1 for distributed queries. 
> As far as I can tell there is no reason to set {{facet.mincount=0}} for 
> refinement purposes. After trying to make sense of all the refinement logic, 
> I can't see how the difference between _no value_ and _value=0_ would have a 
> negative effect.
> *Test perf:*
> - ~15million unique terms
> - query matches ~3million documents
> *Params:*
> {code}
> facet.mincount=1
> facet.limit=500
> facet.method=fcs
> facet.sort=count
> {code}
> *Average Time Per Request:*
> - Before patch:  ~20seconds
> - After patch: <1 second
> *Note*: all tests pass and in my test, the output was identical before and 
> after patch.






[jira] [Commented] (SOLR-8970) SSLTestConfig behaves really stupid if keystore can't be found

2016-05-17 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286973#comment-15286973
 ] 

Steve Rowe commented on SOLR-8970:
--

Manually adding info here for the branch_6x commit I did for the IntelliJ 
fixes, which for some reason didn't get autocommented here - from the commit 
email to commits@l.a.o:

Repository: lucene-solr
Updated Branches:
 refs/heads/branch_6x f73997bb4 -> 29e7d64da

SOLR-8970: IntelliJ config: add src/resources/ as a java-resource dir to the 
solr-test-framework module, so that resources there get copied into the 
compilation output dir.

Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/29e7d64d
Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/29e7d64d
Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/29e7d64d

Branch: refs/heads/branch_6x
Commit: 29e7d64da14a78bf8f1173a01d1553f69a27e9c7
Parents: f73997b
Author: Steve Rowe 
Authored: Mon May 16 20:55:32 2016 -0400
Committer: Steve Rowe 
Committed: Mon May 16 20:56:15 2016 -0400


> SSLTestConfig behaves really stupid if keystore can't be found
> --
>
> Key: SOLR-8970
> URL: https://issues.apache.org/jira/browse/SOLR-8970
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 6.1, master (7.0)
>
> Attachments: SOLR-8970.patch, SOLR-8970.patch, SOLR-8970.patch
>
>
> The SSLTestConfig constructor lets the caller (notably SolrTestCaseJ4) tell it 
> whether clientAuth should be used (note SolrTestCaseJ4 calls this boolean 
> "trySslClientAuth") but it has a hardcoded assumption that the keystore file 
> it can use (for both the keystore and the truststore) will exist at a fixed 
> path in the solr install.
> when this works, it works fine - but if end users subclass/reuse 
> SolrTestCaseJ4 in their own projects, they may do so in a way that prevents 
> the SSLTestConfig keystore assumptions from being true, and yet they won't 
> get any sort of clear error.






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5849 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5849/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testStateWatcherChecksCurrentStateOnRegister

Error Message:
CollectionStateWatcher should be retained expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: CollectionStateWatcher should be retained 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([2E92F10715ECC486:4096CE397CE285C4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testStateWatcherChecksCurrentStateOnRegister(TestCollectionStateWatchers.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:

Re: lucene-solr:master: Move non-inner classes to separate files: This breaks IDEs and update checks by javac

2016-05-17 Thread Ryan Josal
Uwe, I agree keeping pkg private is the simplest and safest approach, and
yes a static inner class would solve the problem of JSONWriter, though for
consistency's sake, JSONWriter feels like it should become a top-level
public class like its counterpart XMLWriter.

Ryan

On Monday, May 16, 2016, Uwe Schindler  wrote:

> Hi,
>
>
>
> I think there was an issue open about that years ago, but I did not catch
> all of those when fixing it. And the facet ones are new.
>
>
>
> When I have time, I will look into refactoring those classes, but there
> are 2 possibilities:
>
>
>
> -  Move them as static inner classes. This allows making them
> public or otherwise visible (I think that is Ryan Josal’s problem).
>
> -  Just keep them pkg-private and move them to separate files.
> That’s easier and faster to do. This is what I have done because of the
> Eclipse problem.
>
>
>
> I prefer the second one, as it keeps the API the same and does not change
> anything.
>
>
>
> I have no idea if there is a way to detect those classes via a static
> tool.
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> H.-H.-Meier-Allee 63, D-28213 Bremen
>
> http://www.thetaphi.de
>
> eMail: u...@thetaphi.de 
>
>
>
> *From:* rjo...@gmail.com
>  [mailto:
> rjo...@gmail.com ] *On
> Behalf Of *Ryan Josal
> *Sent:* Monday, May 16, 2016 11:43 PM
> *To:* dev@lucene.apache.org
> 
> *Subject:* Re: lucene-solr:master: Move non-inner classes to separate
> files: This breaks IDEs and update checks by javac
>
>
>
> +1 it's a pain for plugin development too.  Extending JSONResponseWriter
> comes to mind.
>
> On Monday, May 16, 2016, Chris Hostetter  > wrote:
>
>
> : I found out that there are more of those in the facets module. Can we
> : change those to be real *inner* classes or put them in separate files?
>
> +1 ... it's a really obnoxious misfeature of java in my opinion ... are
> there any static tools we can enable to fail the build for classes like
> these?
>
>
>
> : -
> : Uwe Schindler
> : H.-H.-Meier-Allee 63, D-28213 Bremen
> : http://www.thetaphi.de
> : eMail: u...@thetaphi.de
> :
> : > -Original Message-
> : > From: uschind...@apache.org [mailto:uschind...@apache.org]
> : > Sent: Monday, May 16, 2016 7:54 PM
> : > To: comm...@lucene.apache.org
> : > Subject: lucene-solr:master: Move non-inner classes to separate files:
> This
> : > breaks IDEs and update checks by javac
> : >
> : > Repository: lucene-solr
> : > Updated Branches:
> : >   refs/heads/master 6620fd142 -> ae93f4e7a
> : >
> : >
> : > Move non-inner classes to separate files: This breaks IDEs and update
> checks
> : > by javac
> : >
> : >
> : > Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> : > Commit: http://git-wip-us.apache.org/repos/asf/lucene-
> : > solr/commit/ae93f4e7
> : > Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/ae93f4e7
> : > Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/ae93f4e7
> : >
> : > Branch: refs/heads/master
> : > Commit: ae93f4e7ac6a3908046391de35d4f50a0d3c59ca
> : > Parents: 6620fd1
> : > Author: Uwe Schindler 
> : > Authored: Mon May 16 19:54:10 2016 +0200
> : > Committer: Uwe Schindler 
> : > Committed: Mon May 16 19:54:10 2016 +0200
> : >
> : > --
> : >  .../solr/search/facet/UniqueMultiDvSlotAcc.java |  86 ++
> : >  .../search/facet/UniqueMultivaluedSlotAcc.java  |  69 
> : >  .../search/facet/UniqueSinglevaluedSlotAcc.java |  81 +
> : >  .../apache/solr/search/facet/UniqueSlotAcc.java | 165
> ---
> : >  4 files changed, 236 insertions(+), 165 deletions(-)
> : > --
> : >
> : >
> : > http://git-wip-us.apache.org/repos/asf/lucene-
> : >
> solr/blob/ae93f4e7/solr/core/src/java/org/apache/solr/search/facet/Unique
> : > MultiDvSlotAcc.java
> : > --
> : > diff --git
> : >
> a/solr/core/src/java/org/apache/solr/search/facet/UniqueMultiDvSlotAcc.ja
> : > va
> : >
> b/solr/core/src/java/org/apache/solr/search/facet/UniqueMultiDvSlotAcc.ja
> : > va
> : > new file mode 100644
> : > index 000..4c29753
> : > --- /dev/null
> : > +++
> : >
> b/solr/core/src/java/org/apache/solr/search/facet/UniqueMultiDvSlotAcc.ja
> : > va
> : > @@ -0,0 +1,86 @@
> : > +/*
> : > + * Licensed to the Apache Software Foundation (ASF) under one or more
> : > + * contributor license agreements.  See the NOTICE file distributed
> with
> : > + * this work for additional information regarding copyright ownership.
> : > + * The 

[jira] [Commented] (SOLR-9125) CollapseQParserPlugin allocations are index based, not query based

2016-05-17 Thread Jeff Wartes (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286940#comment-15286940
 ] 

Jeff Wartes commented on SOLR-9125:
---

I messed around a little bit, but I don't have a solution for this. I thought 
I'd file the issue anyway just to shine some light on it.

I had attempted to use CollapseQParserPlugin on a very large index using a 
collapse on a field whose cardinality was about 1/7th the doc count... it 
didn't go well. Worse, the issue didn't come up until pretty late in the game, 
because at low query rate and/or on smaller indexes, the problem isn't evident. 
I abandoned the attempt.

Some stuff I tried:

- I thought about replacing the FBS with a DocIdSetBuilder, but 
DelegatingCollector.finish() gets called twice, and you can't 
DocIdSetBuilder.build() twice on the same builder. We'd need to save the first 
build() result and use it to initialize a new builder for the second, but I 
wasn't convinced I understood the distinction between the two passes.
- I did one quick test where I replaced the "ords" and "scores" arrays with an 
IntIntScatterMap and an IntFloatScatterMap, thinking those would work better for small 
result sets. That ended up being worse (from a total allocations standpoint) 
for the queries I was trying, probably due to the map resizing necessary. It 
might be possible to set initial size values from statistics and help this case 
that way. It would also be possible to encode the docId/score into a long and 
just use one IntLongScatterMap, but I didn't try that.
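For what it's worth, the "encode the docId/score into a long" idea can be sketched as follows. This is purely illustrative (not committed code, and the class and method names are made up): the high 32 bits hold the doc id, the low 32 bits hold the raw float bits of the score, so a single IntLongScatterMap could replace the paired ords/scores arrays.

```java
// Illustrative sketch: pack a (non-negative) doc id and a score into one long.
public class DocScorePacking {
    static long pack(int docId, float score) {
        // high 32 bits: doc id; low 32 bits: raw IEEE-754 bits of the score
        return ((long) docId << 32) | (Float.floatToRawIntBits(score) & 0xFFFFFFFFL);
    }

    static int unpackDocId(long packed) {
        return (int) (packed >>> 32);   // assumes docId >= 0
    }

    static float unpackScore(long packed) {
        return Float.intBitsToFloat((int) packed);
    }

    public static void main(String[] args) {
        long p = pack(42, 3.5f);
        System.out.println(unpackDocId(p)); // 42
        System.out.println(unpackScore(p)); // 3.5
    }
}
```

Whether this actually beats two separate scatter maps on total allocations would still depend on the map-resizing behavior noted above.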

> CollapseQParserPlugin allocations are index based, not query based
> --
>
> Key: SOLR-9125
> URL: https://issues.apache.org/jira/browse/SOLR-9125
> Project: Solr
>  Issue Type: Improvement
>  Components: query parsers
>Reporter: Jeff Wartes
>Priority: Minor
>  Labels: collapsingQParserPlugin
>
> Among other things, CollapsingQParserPlugin’s OrdScoreCollector allocates 
> space per-query for: 
> 1 int (doc id) per ordinal
> 1 float (score) per ordinal
> 1 bit (FixedBitSet) per document in the index
>  
> So the higher the cardinality of the thing you’re grouping on, and the more 
> documents in the index, the more memory gets consumed per query. Since high 
> cardinality and large indexes are the use-cases CollapseQParserPlugin was 
> designed for, I thought I'd point this out.
> My real issue is that this does not vary based on the number of results in 
> the query, either before or after collapsing, so a query that results in one 
> doc consumes the same amount of memory as one that returns all of them. All 
> of the Collectors suffer from this to some degree, but I think OrdScore is 
> the worst offender.






[jira] [Created] (SOLR-9125) CollapseQParserPlugin allocations are index based, not query based

2016-05-17 Thread Jeff Wartes (JIRA)
Jeff Wartes created SOLR-9125:
-

 Summary: CollapseQParserPlugin allocations are index based, not 
query based
 Key: SOLR-9125
 URL: https://issues.apache.org/jira/browse/SOLR-9125
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Reporter: Jeff Wartes
Priority: Minor


Among other things, CollapsingQParserPlugin’s OrdScoreCollector allocates space 
per-query for: 
1 int (doc id) per ordinal
1 float (score) per ordinal
1 bit (FixedBitSet) per document in the index
 
So the higher the cardinality of the thing you’re grouping on, and the more 
documents in the index, the more memory gets consumed per query. Since high 
cardinality and large indexes are the use-cases CollapseQParserPlugin was 
designed for, I thought I'd point this out.

My real issue is that this does not vary based on the number of results in the 
query, either before or after collapsing, so a query that results in one doc 
consumes the same amount of memory as one that returns all of them. All of the 
Collectors suffer from this to some degree, but I think OrdScore is the worst 
offender.
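To make the scale concrete, here is a back-of-envelope estimate of the per-query allocation listed above, using purely illustrative sizes (15M ordinals and a 100M-doc index are assumptions for the example, not numbers from this issue):

```java
// Rough per-query memory estimate: one int plus one float per ordinal,
// plus one bit per document in the index (FixedBitSet).
public class CollapseMemEstimate {
    public static void main(String[] args) {
        long ordinals = 15_000_000L;               // cardinality of the collapse field (assumed)
        long maxDoc   = 100_000_000L;              // documents in the index (assumed)
        long docIds   = ordinals * Integer.BYTES;  // "ords" array: one int per ordinal
        long scores   = ordinals * Float.BYTES;    // "scores" array: one float per ordinal
        long bitset   = maxDoc / 8;                // FixedBitSet: one bit per document
        long totalMb  = (docIds + scores + bitset) >> 20;
        System.out.println("~" + totalMb + " MB allocated per query"); // ~126 MB
    }
}
```

At those sizes a single query allocates on the order of a hundred megabytes, regardless of how many documents the query actually matches, which is the point of the issue.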







[jira] [Commented] (SOLR-9117) Leaking the first SolrCore after reload

2016-05-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286907#comment-15286907
 ] 

Erick Erickson commented on SOLR-9117:
--

[~shalinmangar] I assigned this to myself just so I don't lose track of it, not 
to take it away from you ;). Please go ahead and commit if you get to it before 
I do.


> Leaking the first SolrCore after reload
> ---
>
> Key: SOLR-9117
> URL: https://issues.apache.org/jira/browse/SOLR-9117
> Project: Solr
>  Issue Type: Bug
>Reporter: Jessica Cheng Mallet
>Assignee: Erick Erickson
>  Labels: core, leak
> Attachments: SOLR-9117.patch
>
>
> When a SolrCore for a particular index is created for the first time, it's 
> added to the SolrCores#createdCores map. However, this map doesn't get 
> updated when this core is reloaded, leading to the first SolrCore being 
> leaked.
> Taking a look at how createdCores is used, it seems like it doesn't serve any 
> purpose (its only read is in SolrCores#getAllCoreNames, which includes 
> entries from SolrCores.cores anyway), so I'm proposing a patch to remove the 
> createdCores map completely. However, if someone else knows that createdCores 
> exists for a reason, I'll be happy to change the fix to updating the 
> createdCores map when reload is called.






[jira] [Commented] (SOLR-9116) Race condition causing occasional SolrIndexSearcher leak when SolrCore is reloaded

2016-05-17 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286904#comment-15286904
 ] 

Erick Erickson commented on SOLR-9116:
--

I'll commit this in a day or two unless someone beats me to it. I'm traveling, 
so I can't quite promise when; if someone really wants to take it, please do.

I confirmed this on 6.x both with and without the actual fix.


> Race condition causing occasional SolrIndexSearcher leak when SolrCore is 
> reloaded
> --
>
> Key: SOLR-9116
> URL: https://issues.apache.org/jira/browse/SOLR-9116
> Project: Solr
>  Issue Type: Bug
>Reporter: Jessica Cheng Mallet
>Assignee: Erick Erickson
>  Labels: leak, searcher
> Attachments: SOLR-9116.patch
>
>
> Fix a leak of SolrIndexSearcher when a SolrCore is reloaded. Added a test to 
> expose this leak when run in many iterations (pretty reliable failure with 
> iters=1K), which passes with the fix (ran iters=10K twice).
> The fundamental issue is that when an invocation of SolrCore#openNewSearcher 
> is racing with SolrCore#close, if this synchronized block 
> (https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/SolrCore.java#L1611)
>  in openNewSearcher doesn't check for whether or not the core is closed, it 
> can possibly run after the core runs closeSearcher and assign the newly 
> constructed searcher to realtimeSearcher again, which will never be cleaned 
> up. The fix is to check if the SolrCore is closed inside the synchronized 
> block, and if so, clean up the newly constructed searcher and throw an 
> Exception.
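The fix pattern described in the quoted summary can be modeled with a small self-contained toy. This is not Solr code (names and types here are illustrative stand-ins); it just shows why the closed-check must happen inside the same lock that close() takes, and why the loser of the race must dispose of the searcher it just built:

```java
// Toy model of the race: open must re-check "closed" under the lock,
// otherwise a searcher assigned after close() runs is never released.
public class CloseRaceDemo {
    private final Object lock = new Object();
    private boolean closed = false;
    private Runnable currentCleanup;  // stands in for the searcher's close/decref

    void openNew(Runnable cleanup) {
        synchronized (lock) {
            if (closed) {             // the check added by the patch description
                cleanup.run();        // dispose of the just-built searcher instead of leaking it
                throw new IllegalStateException("core is closed");
            }
            currentCleanup = cleanup; // safe: close() cannot interleave here
        }
    }

    void close() {
        synchronized (lock) {
            closed = true;
            if (currentCleanup != null) { currentCleanup.run(); currentCleanup = null; }
        }
    }

    public static void main(String[] args) {
        CloseRaceDemo core = new CloseRaceDemo();
        core.close();                 // the core closes first, as in the race
        try {
            core.openNew(() -> System.out.println("searcher cleaned up"));
        } catch (IllegalStateException e) {
            System.out.println("open rejected: " + e.getMessage());
        }
    }
}
```

Without the closed-check, the assignment to currentCleanup would silently succeed after close() had already run, which is exactly the leak the patch addresses.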






[jira] [Assigned] (SOLR-9117) Leaking the first SolrCore after reload

2016-05-17 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-9117:


Assignee: Erick Erickson

> Leaking the first SolrCore after reload
> ---
>
> Key: SOLR-9117
> URL: https://issues.apache.org/jira/browse/SOLR-9117
> Project: Solr
>  Issue Type: Bug
>Reporter: Jessica Cheng Mallet
>Assignee: Erick Erickson
>  Labels: core, leak
> Attachments: SOLR-9117.patch
>
>
> When a SolrCore for a particular index is created for the first time, it's 
> added to the SolrCores#createdCores map. However, this map doesn't get 
> updated when this core is reloaded, leading to the first SolrCore being 
> leaked.
> Taking a look at how createdCores is used, it seems like it doesn't serve any 
> purpose (its only read is in SolrCores#getAllCoreNames, which includes 
> entries from SolrCores.cores anyway), so I'm proposing a patch to remove the 
> createdCores map completely. However, if someone else knows that createdCores 
> exists for a reason, I'll be happy to change the fix to updating the 
> createdCores map when reload is called.






[jira] [Assigned] (SOLR-9116) Race condition causing occasional SolrIndexSearcher leak when SolrCore is reloaded

2016-05-17 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-9116:


Assignee: Erick Erickson

> Race condition causing occasional SolrIndexSearcher leak when SolrCore is 
> reloaded
> --
>
> Key: SOLR-9116
> URL: https://issues.apache.org/jira/browse/SOLR-9116
> Project: Solr
>  Issue Type: Bug
>Reporter: Jessica Cheng Mallet
>Assignee: Erick Erickson
>  Labels: leak, searcher
> Attachments: SOLR-9116.patch
>
>
> Fix a leak of SolrIndexSearcher when a SolrCore is reloaded. Added a test to 
> expose this leak when run in many iterations (pretty reliable failure with 
> iters=1K), which passes with the fix (ran iters=10K twice).
> The fundamental issue is that when an invocation of SolrCore#openNewSearcher 
> is racing with SolrCore#close, if this synchronized block 
> (https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/SolrCore.java#L1611)
>  in openNewSearcher doesn't check for whether or not the core is closed, it 
> can possibly run after the core runs closeSearcher and assign the newly 
> constructed searcher to realtimeSearcher again, which will never be cleaned 
> up. The fix is to check if the SolrCore is closed inside the synchronized 
> block, and if so, clean up the newly constructed searcher and throw an 
> Exception.






[jira] [Commented] (LUCENE-7258) Tune DocIdSetBuilder allocation rate

2016-05-17 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286858#comment-15286858
 ] 

David Smiley commented on LUCENE-7258:
--

nit: typo in comment line 96: "cumulated" -> "accumulated"

In ensureBufferCapacity, when buffers.isEmpty, I think the first buffer should 
have a minimum size of 64 (or 32?), not 1.  This will avoid a possible slow 
start of small buffers when numDocs is 0 or 1. At least I saw this while 
setting a breakpoint in some spatial tests, seeing the first two buffers both 
of size one and the 3rd of size two, etc.
Also in this method...
{code:java}
if (current.length < current.array.length - (current.array.length >>> 2)) {
  // current buffer is less than 7/8 full, resize rather than waste space
{code}
That calculation is not 7/8, it's 3/4.
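A quick arithmetic check of that fraction (using an illustrative length of 1024): subtracting an unsigned-shift-by-2 removes a quarter, leaving three quarters, whereas 7/8 would require shifting by 3.

```java
// n - (n >>> 2) removes one quarter of n, leaving 3n/4;
// 7n/8 would be n - (n >>> 3).
public class ThresholdFraction {
    public static void main(String[] args) {
        int n = 1024;
        System.out.println(n - (n >>> 2)); // 768 = 3/4 of 1024
        System.out.println(n - (n >>> 3)); // 896 = 7/8 of 1024
    }
}
```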

In concat(), it'd be helpful to comment that it not only concatenates but 
leaves one additional space too.  Also... do you think it might be worth 
optimizing for the case that there is one buffer that can simply be returned?  
If, when this happens, it tends to be exactly full, then maybe when we allocate 
new buffers we can leave that one additional slot there so that this happens 
more often.

For readability's sake, can you re-order the methods grow, ensureBufferCapacity, 
and addBuffer, growBuffer, upgradeToBitSet to be in this sequence (or 
thereabouts) as that is the sequence of who calls who?  I find it much easier 
to read code top to bottom than bottom up :-)  Likewise, build() could be 
defined before the private utility methods it calls.

> Tune DocIdSetBuilder allocation rate
> 
>
> Key: LUCENE-7258
> URL: https://issues.apache.org/jira/browse/LUCENE-7258
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Jeff Wartes
> Attachments: 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> LUCENE-7258-Tune-memory-allocation-rate-for-Intersec.patch, 
> LUCENE-7258-expanding.patch, allocation_plot.jpg
>
>
> LUCENE-7211 converted IntersectsPrefixTreeQuery to use DocIdSetBuilder, but 
> didn't actually reduce garbage generation for my Solr index.
> Since something like 40% of my garbage (by space) is now attributed to 
> DocIdSetBuilder.growBuffer, I charted a few different allocation strategies 
> to see if I could tune things more. 
> See here: http://i.imgur.com/7sXLAYv.jpg 
> The jump-then-flatline at the right would be where DocIdSetBuilder gives up 
> and allocates a FixedBitSet for a 100M-doc index. (The 1M-doc index 
> curve/cutoff looked similar)
> Perhaps unsurprisingly, the 1/8th growth factor in ArrayUtil.oversize is 
> terrible from an allocation standpoint if you're doing a lot of expansions, 
> and is especially terrible when used to build a short-lived data structure 
> like this one.
> By the time it goes with the FBS, it's allocated around twice as much memory 
> for the buffer as it would have needed for just the FBS.
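The cost of a small growth factor can be illustrated with a toy simulation. The starting capacity and target size below are made up, and this is not Lucene's exact resize logic (ArrayUtil.oversize has extra rules for small arrays); it only shows that growing by 1/8 per step allocates many times the final size in total, since every grow copies into a fresh array that the old one is discarded for:

```java
// Toy simulation: sum the sizes of every intermediate buffer when
// growing by 1/8 per step, like ArrayUtil.oversize's large-array rule.
public class GrowthCost {
    public static void main(String[] args) {
        long size = 64;          // starting capacity (illustrative)
        long total = 0;          // total elements allocated across all resizes
        long target = 1_000_000; // final capacity we need (illustrative)
        while (size < target) {
            total += size;       // the old buffer that gets discarded on grow
            size += size >>> 3;  // grow by 1/8
        }
        total += size;           // the final buffer
        System.out.printf("final=%d, total allocated=%d (%.1fx)%n",
                size, total, (double) total / size);
    }
}
```

With a 9/8 growth factor the total converges toward roughly nine times the final size, which is why frequent expansions of a short-lived buffer show up so heavily in allocation profiles.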






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_92) - Build # 16768 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16768/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B 
val", "":{"v":0},  from server:  http://127.0.0.1:42762/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0},  from server:  http://127.0.0.1:42762/collection1
at 
__randomizedtesting.SeedInfo.seed([1C7F08B310D394D0:942B3769BE2FF928]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:160)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Issues (with patches) using grouped result and exact stats

2016-05-17 Thread Scerri, Antony (ELS)
Hi

As part of a project using grouped results, we looked at using a sharded index. 
The first thing the team noted was some of our application tests failing, which 
was due to the term distribution across the shards. However, switching on 
ExactStatsCache didn't help. This was because the grouped results feature uses 
separate code paths for large parts of its functionality, so exact stats weren't 
enabled. Attempting to resolve this uncovered a couple of other issues: debug 
explain plans don't use exact stats either, which makes their information 
misleading, and all of this, it turned out, stemmed from a minor problem with 
the exact stats not correctly distributing term frequencies in all 
cases (highly dependent upon your document distribution, of course).
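
The scoring effect described above can be illustrated with a toy calculation (a sketch only, not Solr code; the shard sizes and term counts are hypothetical). When document frequencies are not shared across shards, each shard computes IDF from its own local statistics, so identical documents on different shards score differently:

```python
import math

# Hypothetical two-shard index with an uneven spread of one term.
shard_docs = [100, 100]   # total documents on each shard
shard_df   = [40, 2]      # documents containing the term on each shard

def idf(num_docs, doc_freq):
    # Classic Lucene TF/IDF formulation: idf = 1 + ln(N / (df + 1))
    return 1.0 + math.log(num_docs / (doc_freq + 1))

# Per-shard IDF: what each shard uses when stats are NOT distributed.
local_idfs = [idf(n, df) for n, df in zip(shard_docs, shard_df)]

# Global IDF: what ExactStatsCache is meant to provide to every shard.
global_idf = idf(sum(shard_docs), sum(shard_df))

print(local_idfs, global_idf)  # the two local values straddle the global one
```

The larger the skew in term distribution between shards, the further the local IDF values drift from the global one, which is exactly why the failing tests depended on how documents happened to land on shards.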

So I have registered three bugs (listed below) for these issues, in reverse 
order to the descriptions above, as I went back through tackling the primary 
cause first and creating patches with test cases for each. Note I did these 
against the 5.x branch because, whilst attempting to apply them to master, I 
couldn't get the test case behaviours to work. After going back to 5.x, where 
I had originally worked through the fixes, I finally determined that the use of 
caching in the test case environment was the problem. I believe applying the 
changes to master, based on where I was at a few months ago, should be fairly 
straightforward; sadly I haven't had time to revisit this. Also, because of the 
nature of the relationship between the issues, the patch attached to each Jira 
issue is dependent upon the preceding issue's patch (hopefully this isn't too 
much of an issue).

SOLR-9122 - ExactStatsCache doesn't share all stats
SOLR-9123 - Explain plans not using ExactStatsCache in debug mode
SOLR-9124 - Grouped Results does not support ExactStatsCache
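
For readers wanting to reproduce this, exact distributed stats are enabled per core in solrconfig.xml (a minimal fragment, assuming the standard Solr 5.x/6.x class name):

```xml
<!-- solrconfig.xml: replace the default per-shard stats behaviour with
     exact, globally merged term statistics for distributed scoring. -->
<statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
```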

It is worth noting that this will of course introduce subtle changes in 
behaviour, and potentially some performance overhead in some cases, depending 
on how the features have been used.

Hopefully these changes will be accepted as-is but should any queries arise 
I'll attempt to answer as necessary.

Tony

Antony Scerri
Lead Architect, Elsevier




Elsevier Limited. Registered Office: The Boulevard, Langford Lane, Kidlington, 
Oxford, OX5 1GB, United Kingdom, Registration No. 1982084, Registered in 
England and Wales.


[jira] [Updated] (SOLR-9124) Grouped Results does not support ExactStatsCache

2016-05-17 Thread Antony Scerri (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antony Scerri updated SOLR-9124:

Summary: Grouped Results does not support ExactStatsCache  (was: Grouped 
Results does not use ExactStatsCache)

> Grouped Results does not support ExactStatsCache
> 
>
> Key: SOLR-9124
> URL: https://issues.apache.org/jira/browse/SOLR-9124
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5, 6.0
>Reporter: Antony Scerri
>  Labels: ExactStatsCache, Grouping
> Fix For: 5.5.1, 6.1
>
> Attachments: SOLR-9124.patch
>
>
> When using grouped results and trying to use ExactStatsCache it has no 
> effect. The grouped by code branches off and doesn't incorporate those steps 
> in the evaluation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9124) Grouped Results does not use ExactStatsCache

2016-05-17 Thread Antony Scerri (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antony Scerri updated SOLR-9124:

Attachment: SOLR-9124.patch

The attached patch extends upon SOLR-9122 and SOLR-9123 to support use of 
ExactStatsCache with grouped results. The test cases have been extended to 
demonstrate the problem with grouped results and then fix the problem by 
enabling it in the query evaluation phases.

Because the code branches used for grouped and ungrouped queries differ for 
common elements such as sorting, this still leads to an unfortunate scenario 
where documents with the same score are not ordered across and within groups in 
the same manner (e.g. shard index and then position in results as secondary and 
tertiary sorting criteria). This means some of the test cases re-sort 
items to look for the correct scores and cannot rely upon the native sorting.

The patch is against the 5.x branch and built upon the patches for SOLR-9122 
and SOLR-9123, but should be straightforward to apply to master.


> Grouped Results does not use ExactStatsCache
> 
>
> Key: SOLR-9124
> URL: https://issues.apache.org/jira/browse/SOLR-9124
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5, 6.0
>Reporter: Antony Scerri
>  Labels: ExactStatsCache, Grouping
> Fix For: 5.5.1, 6.1
>
> Attachments: SOLR-9124.patch
>
>
> When using grouped results and trying to use ExactStatsCache it has no 
> effect. The grouped by code branches off and doesn't incorporate those steps 
> in the evaluation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9124) Grouped Results does not use ExactStatsCache

2016-05-17 Thread Antony Scerri (JIRA)
Antony Scerri created SOLR-9124:
---

 Summary: Grouped Results does not use ExactStatsCache
 Key: SOLR-9124
 URL: https://issues.apache.org/jira/browse/SOLR-9124
 Project: Solr
  Issue Type: Bug
  Components: Server
Affects Versions: 6.0, 5.5
Reporter: Antony Scerri
 Fix For: 5.5.1, 6.1


When using grouped results and trying to use ExactStatsCache, it has no effect. 
The grouped-results code branches off and doesn't incorporate those steps in the 
evaluation. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9123) Explain plans not using ExactStatsCache in debug mode

2016-05-17 Thread Antony Scerri (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286810#comment-15286810
 ] 

Antony Scerri edited comment on SOLR-9123 at 5/17/16 3:12 PM:
--

The attached patch extends upon SOLR-9122 by extending the test coverage to 
compare explain plans. It also contains the fix for enabling use of 
ExactStatsCache information during the debug phase and generation of explain 
plans.

The patch is against the 5.x branch and built upon the patch with SOLR-9122 but 
should be straightforward to apply to the master. 


was (Author: antonyscerri):
The attached patch extends upon SOLR-1992 by extending the test coverage to 
compare explain plans. It also contains the fix for enabling use of 
ExactStatsCache information during the debug phase and generation of explain 
plans.

The patch is against the 5.x branch and built upon the patch with SOLR-1992 but 
should be straightforward to apply to the master. 

> Explain plans not using ExactStatsCache in debug mode
> -
>
> Key: SOLR-9123
> URL: https://issues.apache.org/jira/browse/SOLR-9123
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5, 6.0
>Reporter: Antony Scerri
>  Labels: Debug, Explain
> Fix For: 5.5.1, 6.1
>
> Attachments: SOLR-9123.patch
>
>
> When using ExactStatsCache and debug mode the explain plans don't match the 
> actual scores of the returned documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9123) Explain plans not using ExactStatsCache in debug mode

2016-05-17 Thread Antony Scerri (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antony Scerri updated SOLR-9123:

Attachment: SOLR-9123.patch

The attached patch extends upon SOLR-9122 by extending the test coverage to 
compare explain plans. It also contains the fix for enabling use of 
ExactStatsCache information during the debug phase and generation of explain 
plans.

The patch is against the 5.x branch and built upon the patch with SOLR-9122 but 
should be straightforward to apply to the master. 

> Explain plans not using ExactStatsCache in debug mode
> -
>
> Key: SOLR-9123
> URL: https://issues.apache.org/jira/browse/SOLR-9123
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5, 6.0
>Reporter: Antony Scerri
>  Labels: Debug, Explain
> Fix For: 5.5.1, 6.1
>
> Attachments: SOLR-9123.patch
>
>
> When using ExactStatsCache and debug mode the explain plans don't match the 
> actual scores of the returned documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9123) Explain plans not using ExactStatsCache in debug mode

2016-05-17 Thread Antony Scerri (JIRA)
Antony Scerri created SOLR-9123:
---

 Summary: Explain plans not using ExactStatsCache in debug mode
 Key: SOLR-9123
 URL: https://issues.apache.org/jira/browse/SOLR-9123
 Project: Solr
  Issue Type: Bug
  Components: Server
Affects Versions: 6.0, 5.5
Reporter: Antony Scerri
 Fix For: 5.5.1, 6.1


When using ExactStatsCache and debug mode the explain plans don't match the 
actual scores of the returned documents.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9122) ExactStatsCache doesn't share all stats

2016-05-17 Thread Antony Scerri (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Antony Scerri updated SOLR-9122:

Attachment: SOLR-9122.patch

The attached patch demonstrates the problem by adding more exhaustive tests, 
including disabling the caching used in the old ones. The tests also improve 
the information being checked and remove some of the randomness built into the 
older tests when creating the indexes, by now creating all permutations. The 
patch also contains a fix for the problem. There may be alternative approaches 
to solving this, but they may require more work in other areas. Additional 
logging has also been added.

The patch is against the 5.x branch but should be straightforward to apply to 
the master.


> ExactStatsCache doesn't share all stats
> ---
>
> Key: SOLR-9122
> URL: https://issues.apache.org/jira/browse/SOLR-9122
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.5, 6.0
>Reporter: Antony Scerri
>  Labels: ExactStatsCache
> Fix For: 5.5.1, 6.0.1
>
> Attachments: SOLR-9122.patch
>
>
> The exact stats cache doesn't distribute stats due to a restrictive 
> optimization, which meant that in some cases document counts (required for 
> IDF) were not being sent back. This caused TF/IDF calculations to miss some 
> information, leading to differences depending on the distribution of 
> documents and terms across shards.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9122) ExactStatsCache doesn't share all stats

2016-05-17 Thread Antony Scerri (JIRA)
Antony Scerri created SOLR-9122:
---

 Summary: ExactStatsCache doesn't share all stats
 Key: SOLR-9122
 URL: https://issues.apache.org/jira/browse/SOLR-9122
 Project: Solr
  Issue Type: Bug
  Components: Server
Affects Versions: 6.0, 5.5
Reporter: Antony Scerri
 Fix For: 5.5.1, 6.0.1


The exact stats cache doesn't distribute stats due to a restrictive 
optimization, which meant that in some cases document counts (required for IDF) 
were not being sent back. This caused TF/IDF calculations to miss some 
information, leading to differences depending on the distribution of documents 
and terms across shards.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-797) Construct EmbeddedSolrServer response without serializing/parsing

2016-05-17 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286755#comment-15286755
 ] 

Mikhail Khludnev commented on SOLR-797:
---

Colleagues, what about flipping Commons IO to 2.5 and using [a really 
expandable 
buffer|https://commons.apache.org/proper/commons-io/javadocs/api-release/src-html/org/apache/commons/io/output/ByteArrayOutputStream.html#line.336]?

> Construct EmbeddedSolrServer response without serializing/parsing
> -
>
> Key: SOLR-797
> URL: https://issues.apache.org/jira/browse/SOLR-797
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 1.3
>Reporter: Jonathan Lee
>Priority: Minor
> Fix For: 4.9, 6.0
>
> Attachments: SOLR-797.patch, SOLR-797.patch, SOLR-797.patch
>
>
> Currently, the EmbeddedSolrServer serializes the response and reparses in 
> order to create the final NamedList response.  From the comment in 
> EmbeddedSolrServer.java, the goal is to:
> * convert the response directly into a named list



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9121) ant precommit fails on ant check-lib-versions

2016-05-17 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286732#comment-15286732
 ] 

Steve Rowe commented on SOLR-9121:
--

I'll resolve once Jenkins has succeeded, e.g. 
[http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16768/], which has the 
revision of the commit on this issue: be51726.

> ant precommit fails on ant check-lib-versions
> -
>
> Key: SOLR-9121
> URL: https://issues.apache.org/jira/browse/SOLR-9121
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
> Attachments: SOLR-9121-passthrough.patch, SOLR-9121.patch
>
>
> e.g.  http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16766/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 139 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/139/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 47611 lines...]
BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:740: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:122: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:104: 
The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/tools/custom-tasks.xml:108:
 Exception reading 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/top-level-ivy-settings.xml:
 java.text.ParseException: failed to load settings from 
file:/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/top-level-ivy-settings.xml:
 io problem while parsing config file: 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/${ivysettings.xml}
 (No such file or directory)
at 
org.apache.ivy.core.settings.XmlSettingsParser.doParse(XmlSettingsParser.java:165)
at 
org.apache.ivy.core.settings.XmlSettingsParser.parse(XmlSettingsParser.java:150)
at org.apache.ivy.core.settings.IvySettings.load(IvySettings.java:391)
at org.apache.ivy.Ivy.configure(Ivy.java:416)
at 
org.apache.lucene.validation.LibVersionsCheckTask.setupIvy(LibVersionsCheckTask.java:698)
at 
org.apache.lucene.validation.LibVersionsCheckTask.execute(LibVersionsCheckTask.java:211)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at 
org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 

[jira] [Commented] (SOLR-9121) ant precommit fails on ant check-lib-versions

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286675#comment-15286675
 ] 

ASF subversion and git services commented on SOLR-9121:
---

Commit 01ed4a5f7d837047306aaa37e0f4f2cdda8fb72a in lucene-solr's branch 
refs/heads/branch_6x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=01ed4a5 ]

SOLR-9121: Fix check-lib-versions task to pass through the "ivysettings.xml" 
property as an Ivy variable so that the nested ivy settings file can be located 
when parsing the top-level ivy settings file.


> ant precommit fails on ant check-lib-versions
> -
>
> Key: SOLR-9121
> URL: https://issues.apache.org/jira/browse/SOLR-9121
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
> Attachments: SOLR-9121-passthrough.patch, SOLR-9121.patch
>
>
> e.g.  http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16766/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)

2016-05-17 Thread Daniel Bigham (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286674#comment-15286674
 ] 

Daniel Bigham commented on LUCENE-7284:
---

Whoops, my apologies.




> UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym 
> Query Expansion)
> -
>
> Key: LUCENE-7284
> URL: https://issues.apache.org/jira/browse/LUCENE-7284
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Daniel Bigham
>Assignee: Alan Woodward
>Priority: Blocker
> Attachments: LUCENE-7284.patch
>
>
> I am trying to support synonyms on the query side by doing 
> query expansion.
> For example, the query "open webpage" can be expanded if the following 
> things are synonyms:
> "open" | "go to"
> This becomes the following: (I'm using both the stop word filter and the 
> stemming filter)
> {code}
> spanNear(
>  [
>  spanOr([Title:open, Title:go]),
>  Title:webpag
>  ],
>  0,
>  true
> )
> {code}
> Notice that "go to" became just "go", because apparently "to" is removed 
> by the stop word filter.
> Interestingly, if you turn "go to webpage" into a phrase, you get "go ? 
> webpage", but if you turn "go to" into a phrase, you just get "go", 
> because apparently a trailing stop word in a PhraseQuery gets dropped. 
> (there would actually be no way to represent the gap currently because 
> it represents gaps implicitly via the position of the phrase tokens, and 
> if there is no second token, there's no way to implicitly indicate that 
> there is a gap there)
> The above query then fails to match "go to webpage", because "go to 
> webpage" in the index tokenizes as "go _ webpage", and the query, 
> because it lost its gap, tried to only match "go webpage".
> To try and work around that, I represent "go to" not as a phrase, but as 
> a SpanNearQuery, like this:
> {code}
> spanNear(
>  [
>  spanOr(
>  [
>  Title:open,
>  spanNear([Title:go, SpanGap(:1)], 0, true),
>  ]
>  ),
>  Title:webpag
>  ],
>  0,
>  true
> )
> {code}
> However, when I run that query, I get the following:
> {code}
> A Java exception occurred: java.lang.UnsupportedOperationException
>  at 
> org.apache.lucene.search.spans.SpanNearQuery$GapSpans.positionsCost(SpanNearQuery.java:398)
>  at 
> org.apache.lucene.search.spans.ConjunctionSpans.asTwoPhaseIterator(ConjunctionSpans.java:96)
>  at 
> org.apache.lucene.search.spans.NearSpansOrdered.asTwoPhaseIterator(NearSpansOrdered.java:45)
>  at 
> org.apache.lucene.search.spans.ScoringWrapperSpans.asTwoPhaseIterator(ScoringWrapperSpans.java:88)
>  at 
> org.apache.lucene.search.ConjunctionDISI.addSpans(ConjunctionDISI.java:104)
>  at 
> org.apache.lucene.search.ConjunctionDISI.intersectSpans(ConjunctionDISI.java:82)
>  at 
> org.apache.lucene.search.spans.ConjunctionSpans.<init>(ConjunctionSpans.java:41)
>  at 
> org.apache.lucene.search.spans.NearSpansOrdered.<init>(NearSpansOrdered.java:54)
>  at 
> org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight.getSpans(SpanNearQuery.java:232)
>  at 
> org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:134)
>  at org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:38)
>  at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
> {code}
> ... and when I look up that GapSpans class in SpanNearQuery.java, I see:
> {code}
> @Override
> public float positionsCost() {
>throw new UnsupportedOperationException();
> }
> {code}
> I asked this question on the mailing list on May 14 and was directed to 
> submit a bug here.
> This issue is of relatively high priority for us, since this represents the 
> most promising technique we have for supporting synonyms on top of Lucene. 
> (since the SynonymFilter suffers serious issues wrt multi-word synonyms)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 676 - Still Failing!

2016-05-17 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/676/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 47696 lines...]
BUILD FAILED
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:740: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/build.xml:122: The following 
error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/build.xml:104: The 
following error occurred while executing this line:
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/tools/custom-tasks.xml:108:
 Exception reading 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/top-level-ivy-settings.xml:
 java.text.ParseException: failed to load settings from 
file:/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/top-level-ivy-settings.xml:
 io problem while parsing config file: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/lucene/${ivysettings.xml} (No 
such file or directory)
at 
org.apache.ivy.core.settings.XmlSettingsParser.doParse(XmlSettingsParser.java:165)
at 
org.apache.ivy.core.settings.XmlSettingsParser.parse(XmlSettingsParser.java:150)
at org.apache.ivy.core.settings.IvySettings.load(IvySettings.java:391)
at org.apache.ivy.Ivy.configure(Ivy.java:416)
at 
org.apache.lucene.validation.LibVersionsCheckTask.setupIvy(LibVersionsCheckTask.java:698)
at 
org.apache.lucene.validation.LibVersionsCheckTask.execute(LibVersionsCheckTask.java:211)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.taskdefs.Sequential.execute(Sequential.java:68)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at 
org.apache.tools.ant.taskdefs.MacroInstance.execute(MacroInstance.java:398)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:302)
at org.apache.tools.ant.taskdefs.SubAnt.execute(SubAnt.java:221)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
at org.apache.tools.ant.Task.perform(Task.java:348)
at org.apache.tools.ant.Target.execute(Target.java:390)
at org.apache.tools.ant.Target.performTasks(Target.java:411)
at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1399)
at 
org.apache.tools.ant.helper.SingleCheckExecutor.executeTargets(SingleCheckExecutor.java:38)
at org.apache.tools.ant.Project.executeTargets(Project.java:1251)
at org.apache.tools.ant.taskdefs.Ant.execute(Ant.java:442)
at org.apache.tools.ant.taskdefs.CallTarget.execute(CallTarget.java:105)
at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:291)
at sun.reflect.GeneratedMethodAccessor4.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 

[jira] [Commented] (SOLR-9121) ant precommit fails on ant check-lib-versions

2016-05-17 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15286672#comment-15286672
 ] 

ASF subversion and git services commented on SOLR-9121:
---

Commit be5172631d9da0ec4ba0e501c4f964153d952d3b in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be51726 ]

SOLR-9121: Fix check-lib-versions task to pass through the "ivysettings.xml" 
property as an Ivy variable so that the nested ivy settings file can be located 
when parsing the top-level ivy settings file.


> ant precommit fails on ant check-lib-versions
> -
>
> Key: SOLR-9121
> URL: https://issues.apache.org/jira/browse/SOLR-9121
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
> Attachments: SOLR-9121-passthrough.patch, SOLR-9121.patch
>
>
> e.g.  http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16766/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)

2016-05-17 Thread Alan Woodward (JIRA)

 [ https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alan Woodward reopened LUCENE-7284:
---

Hi Daniel, we normally wait till the fix is committed before resolving the 
issue - I'll probably commit tomorrow morning.  Thanks for testing!

> UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym 
> Query Expansion)
> -
>
> Key: LUCENE-7284
> URL: https://issues.apache.org/jira/browse/LUCENE-7284
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Daniel Bigham
>Assignee: Alan Woodward
>Priority: Blocker
> Attachments: LUCENE-7284.patch
>
>
> I am trying to support synonyms on the query side by doing 
> query expansion.
> For example, the query "open webpage" can be expanded if the following 
> things are synonyms:
> "open" | "go to"
> This becomes the following: (I'm using both the stop word filter and the 
> stemming filter)
> {code}
> spanNear(
>  [
>  spanOr([Title:open, Title:go]),
>  Title:webpag
>  ],
>  0,
>  true
> )
> {code}
> Notice that "go to" became just "go", because apparently "to" is removed 
> by the stop word filter.
> Interestingly, if you turn "go to webpage" into a phrase, you get "go ? 
> webpage", but if you turn "go to" into a phrase, you just get "go", 
> because apparently a trailing stop word in a PhraseQuery gets dropped. 
> (there would actually be no way to represent the gap currently because 
> it represents gaps implicitly via the position of the phrase tokens, and 
> if there is no second token, there's no way to implicitly indicate that 
> there is a gap there)
> The above query then fails to match "go to webpage", because "go to 
> webpage" in the index tokenizes as "go _ webpage", and the query, 
> because it lost its gap, tried to only match "go webpage".
> To try and work around that, I represent "go to" not as a phrase, but as 
> a SpanNearQuery, like this:
> {code}
> spanNear(
>  [
>  spanOr(
>  [
>  Title:open,
>  spanNear([Title:go, SpanGap(:1)], 0, true),
>  ]
>  ),
>  Title:webpag
>  ],
>  0,
>  true
> )
> {code}
> However, when I run that query, I get the following:
> {code}
> A Java exception occurred: java.lang.UnsupportedOperationException
>  at org.apache.lucene.search.spans.SpanNearQuery$GapSpans.positionsCost(SpanNearQuery.java:398)
>  at org.apache.lucene.search.spans.ConjunctionSpans.asTwoPhaseIterator(ConjunctionSpans.java:96)
>  at org.apache.lucene.search.spans.NearSpansOrdered.asTwoPhaseIterator(NearSpansOrdered.java:45)
>  at org.apache.lucene.search.spans.ScoringWrapperSpans.asTwoPhaseIterator(ScoringWrapperSpans.java:88)
>  at org.apache.lucene.search.ConjunctionDISI.addSpans(ConjunctionDISI.java:104)
>  at org.apache.lucene.search.ConjunctionDISI.intersectSpans(ConjunctionDISI.java:82)
>  at org.apache.lucene.search.spans.ConjunctionSpans.<init>(ConjunctionSpans.java:41)
>  at org.apache.lucene.search.spans.NearSpansOrdered.<init>(NearSpansOrdered.java:54)
>  at org.apache.lucene.search.spans.SpanNearQuery$SpanNearWeight.getSpans(SpanNearQuery.java:232)
>  at org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:134)
>  at org.apache.lucene.search.spans.SpanWeight.scorer(SpanWeight.java:38)
>  at org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
> {code}
> ... and when I look up that GapSpans class in SpanNearQuery.java, I see:
> {code}
> @Override
> public float positionsCost() {
>throw new UnsupportedOperationException();
> }
> {code}
> I asked this question on the mailing list on May 14 and was directed to 
> submit a bug here.
> This issue is of relatively high priority for us, since this represents the 
> most promising technique we have for supporting synonyms on top of Lucene. 
> (since the SynonymFilter suffers serious issues wrt multi-word synonyms)
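
For readers landing on this thread later: the gap workaround described above does not have to be hand-assembled; the stock builder API can construct it. A minimal sketch against the Lucene 6.x span API (the field name, terms, and gap width are taken from the example above; the builder usage is an illustration, not code from this issue):

{code}
// Builds: spanNear([spanOr([Title:open, spanNear([Title:go, SpanGap(1)])]), Title:webpag], 0, true)
SpanQuery goWithGap = new SpanNearQuery.Builder("Title", true)  // ordered
    .addClause(new SpanTermQuery(new Term("Title", "go")))
    .addGap(1)  // stands in for the stop word "to" dropped at analysis time
    .build();

SpanQuery openOrGoTo = new SpanOrQuery(
    new SpanTermQuery(new Term("Title", "open")),
    goWithGap);

SpanQuery query = new SpanNearQuery.Builder("Title", true)
    .addClause(openOrGoTo)
    .addClause(new SpanTermQuery(new Term("Title", "webpag")))
    .build();
{code}

Running such a query is exactly what triggers the {{positionsCost()}} exception reported here, since the outer SpanNearQuery asks its clauses for a two-phase iterator cost that GapSpans did not implement before the attached patch.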



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9121) ant precommit fails on ant check-lib-versions

2016-05-17 Thread Steve Rowe (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286661#comment-15286661 ]

Steve Rowe commented on SOLR-9121:
--

Thanks Christine, I'll go commit now.

> ant precommit fails on ant check-lib-versions
> -
>
> Key: SOLR-9121
> URL: https://issues.apache.org/jira/browse/SOLR-9121
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
> Attachments: SOLR-9121-passthrough.patch, SOLR-9121.patch
>
>
> e.g.  http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16766/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)

2016-05-17 Thread Daniel Bigham (JIRA)

 [ https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Daniel Bigham resolved LUCENE-7284.
---
Resolution: Fixed

> UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym 
> Query Expansion)
> -
>
> Key: LUCENE-7284
> URL: https://issues.apache.org/jira/browse/LUCENE-7284
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Daniel Bigham
>Assignee: Alan Woodward
>Priority: Blocker
> Attachments: LUCENE-7284.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7284) UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym Query Expansion)

2016-05-17 Thread Daniel Bigham (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286657#comment-15286657 ]

Daniel Bigham commented on LUCENE-7284:
---

Confirmed the fix.  My synonym expansion strategy now appears to work as hoped. 
A big thank you to Alan!

> UnsupportedOperationException wrt SpanNearQuery with Gap (Needed for Synonym 
> Query Expansion)
> -
>
> Key: LUCENE-7284
> URL: https://issues.apache.org/jira/browse/LUCENE-7284
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Reporter: Daniel Bigham
>Assignee: Alan Woodward
>Priority: Blocker
> Attachments: LUCENE-7284.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9121) ant precommit fails on ant check-lib-versions

2016-05-17 Thread Christine Poerschke (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15286652#comment-15286652 ]

Christine Poerschke commented on SOLR-9121:
---

Hi Steve, for me the check-lib-versions task succeeds also with the passthrough 
patch.

Patch looks good to me, it's neat how the 
{{getProject().getProperty("ivysettings.xml")}} means that the 
{{default-nested-ivy-settings.xml}} file (or 
{{my-custom-nested-ivy-settings.xml}} with SOLR-9109) can be used without 
cluttering up the attributes passed to the LibVersionsCheckTask task itself.
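
For anyone reading along, the mechanism being praised might look roughly like this inside the task's setup code. This is a sketch only: the surrounding structure is an assumption, not the actual SOLR-9121 patch; the Ant {{getProject().getProperty}} call and the Ivy {{IvySettings}} API are real, but how the patch wires them together is inferred from the description.

{code}
// Read the build-level property and republish it as an Ivy variable, so the
// top-level settings file can reference a nested settings file via
// ${ivysettings.xml} while it is being parsed.
String settingsPath = getProject().getProperty("ivysettings.xml");
IvySettings ivySettings = new IvySettings();
ivySettings.setVariable("ivysettings.xml", settingsPath);
ivySettings.load(new File(settingsPath));
{code}

The nice property, as noted above, is that none of this leaks into the attributes of the LibVersionsCheckTask invocation itself.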

> ant precommit fails on ant check-lib-versions
> -
>
> Key: SOLR-9121
> URL: https://issues.apache.org/jira/browse/SOLR-9121
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
> Attachments: SOLR-9121-passthrough.patch, SOLR-9121.patch
>
>
> e.g.  http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/16766/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


