[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 1078 - Failure!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1078/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch

Error Message:
CollectionStateWatcher wasn't cleared after completion

Stack Trace:
java.lang.AssertionError: CollectionStateWatcher wasn't cleared after completion
at 
__randomizedtesting.SeedInfo.seed([942277EBEA12993F:C919B89BAD1F0601]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.cloud.TestCollectionStateWatchers.testSimpleCollectionWatch(TestCollectionStateWatchers.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 13245 lines...]
   [junit4] Suite: org.apache.solr.common.cloud.TestCollectionStateWatchers
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-9163) Confusing solrconfig.xml in the downloaded solr*.zip

2016-07-06 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9163:
---
Attachment: SOLR-9163.patch

OK, full patch attached, essentially syncing the two configsets.
I'll commit tomorrow, since no concerns or objections have been raised on this 
issue so far.

> Confusing solrconfig.xml in the downloaded solr*.zip
> 
>
> Key: SOLR-9163
> URL: https://issues.apache.org/jira/browse/SOLR-9163
> Project: Solr
>  Issue Type: Bug
>Reporter: Sachin Goyal
> Attachments: SOLR-9163.patch, SOLR-9163.patch
>
>
> Here are the solrconfig.xml when I download and unzip solr:
> {code}
> find . -name solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/db/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/mail/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/rss/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/solr/conf/solrconfig.xml
> ./solr-5.5.1/example/example-DIH/solr/tika/conf/solrconfig.xml
> ./solr-5.5.1/example/files/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/basic_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/data_driven_schema_configs/conf/solrconfig.xml
> ./solr-5.5.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
> {code}
> Most likely, the ones I want to use are the ones in server/solr/configsets.
> But which one of those three?
> Searching online does not provide much detailed information.
> And diff-ing among them yields even more confusing results.
> Example: When I diff basic_configs/conf/solrconfig.xml with 
> data_driven_schema_configs/conf/solrconfig.xml, I am not sure why the latter 
> has these extra constructs:
> # solr.LimitTokenCountFilterFactory and all the comments around it.
> # deletionPolicy class="solr.SolrDeletionPolicy"
> # Commented out infoStream file="INFOSTREAM.txt"
> # Extra comments for "Update Related Event Listeners"
> # indexReaderFactory
> # And so for lots of other constructs and comments.
> The point is that it is difficult to find out exactly which extra features in 
> the latter make it data-driven, and hence difficult to know what features I am 
> losing by not taking the data-driven schema.
> It would be good to sync the above three files (each file should have the 
> same comments and differ only in the configuration that makes it different). 
> Also, some good documentation about them should be put online; otherwise it is 
> very confusing for non-committers and vanilla users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 699 - Still Failing!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/699/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateWriterTest.testExternalModificationToStateFormat2

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([FBAC3EEEB4DB63A3:8A712A3AD5709F23]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.overseer.ZkStateWriterTest.testExternalModificationToStateFormat2(ZkStateWriterTest.java:328)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:37778/solr: 'location' is not specified 
as a query parameter or as a default repository property or as a cluster 
property.

Stack 

[jira] [Updated] (SOLR-9270) Let spatialContextFactory attribute accept "JTS" and the old value

2016-07-06 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9270:
---
 Assignee: David Smiley
Fix Version/s: 6.2
  Component/s: spatial

> Let spatialContextFactory attribute accept "JTS" and the old value
> --
>
> Key: SOLR-9270
> URL: https://issues.apache.org/jira/browse/SOLR-9270
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.2
>
> Attachments: SOLR-9270.patch
>
>
> The spatialContextFactory attribute (sometimes set on RPT field) is 
> interpreted by a Spatial4j SpatialContextFactory and is expected to be a 
> class name.  In the Solr adapter, for ease of use, it would be nice to accept 
> simply "JTS".
> Furthermore the older value in 5x should be accepted with a logged warning.  
> That would make upgrading easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9270) Let spatialContextFactory attribute accept "JTS" and the old value

2016-07-06 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-9270:
---
Attachment: SOLR-9270.patch

This patch adds a "JTS" alias, and it also rewrites any attribute value 
containing "com.spatial4j.core" to use "org.locationtech.spatial4j" instead.  The 
spatialContextFactory isn't the only attribute this applies to; there are some 
others: 
https://locationtech.github.io/spatial4j/apidocs/com/spatial4j/core/context/SpatialContextFactory.html

There is no test as I don't want to bring in a JTS dependency.  I tested 
manually, and observed the expected warnings when the old class name is 
referenced.

I'll commit this Friday.  I plan to retroactively include a note in the 
CHANGES.txt 6.0 migration section mentioning the change in package name, and that 
upgrading to 6.2 may be easier because of this change.  I'll also update the ref guide.
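
Sketched below is the kind of normalization described above, purely as an 
illustration and not the actual patch; the class and method names are 
hypothetical, and the JtsSpatialContextFactory class name is assumed from the 
relocated Spatial4j package layout.

{code}
// Hypothetical sketch of the "JTS" alias and package rewriting described above.
public class SpatialFactoryAliasSketch {

  /** Normalize a spatialContextFactory-style attribute value. */
  static String normalize(String value) {
    if (value == null) {
      return null;
    }
    if ("JTS".equalsIgnoreCase(value)) {
      // assumed fully qualified name of the JTS-backed factory after the package move
      return "org.locationtech.spatial4j.context.jts.JtsSpatialContextFactory";
    }
    if (value.contains("com.spatial4j.core")) {
      // the real code would log a deprecation warning here, as noted above
      return value.replace("com.spatial4j.core", "org.locationtech.spatial4j");
    }
    return value;
  }

  public static void main(String[] args) {
    System.out.println(normalize("JTS"));
    System.out.println(normalize("com.spatial4j.core.context.jts.JtsSpatialContextFactory"));
  }
}
{code}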

> Let spatialContextFactory attribute accept "JTS" and the old value
> --
>
> Key: SOLR-9270
> URL: https://issues.apache.org/jira/browse/SOLR-9270
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
> Attachments: SOLR-9270.patch
>
>
> The spatialContextFactory attribute (sometimes set on RPT field) is 
> interpreted by a Spatial4j SpatialContextFactory and is expected to be a 
> class name.  In the Solr adapter, for ease of use, it would be nice to accept 
> simply "JTS".
> Furthermore the older value in 5x should be accepted with a logged warning.  
> That would make upgrading easier.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9277) Clean up some more remnants of supporting old and new style solr.xml in tests

2016-07-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-9277.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.2

> Clean up some more remnants of supporting old and new style solr.xml in tests
> -
>
> Key: SOLR-9277
> URL: https://issues.apache.org/jira/browse/SOLR-9277
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 6.2, master (7.0)
>
> Attachments: beast-9277
>
>
> I have reason to look at the tests and I'm seeing a few remnants of old/new 
> style Solr.xml support (with and without ). So far:
> > SolrTestCaseJ4.copySolrHomeToTemp with a flag whether old or new style.
> > solr-no-core.xml in test files. Mostly this is identical to solr.xml, here 
> > are the differences:
> in solr-no-core.xml but not solr.xml
>  name="autoReplicaFailoverWaitAfterExpiration">${autoReplicaFailoverWaitAfterExpiration:1}
>  name="autoReplicaFailoverWorkLoopDelay">${autoReplicaFailoverWorkLoopDelay:1}
>  name="autoReplicaFailoverBadNodeExpiration">${autoReplicaFailoverBadNodeExpiration:6}
> in solr.xml but not in solr-no-cores.xml:
> ${leaderVoteWait:1}
> The question here is whether moving the three properties in solr-no-cores.xml 
> to solr.xml  and using solr.xml in all the tests that currently use 
> solr-no-cores.xml would mess up tests and whether leaderVoteWait being in 
> solr.xml would mess up tests currently using solr-no-cores.xml.
> I'll make a quick hack at this to see and we can discuss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9277) Clean up some more remnants of supporting old and new style solr.xml in tests

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365572#comment-15365572
 ] 

ASF subversion and git services commented on SOLR-9277:
---

Commit 602a72ddade76931b90d59bd03365666c2835223 in lucene-solr's branch 
refs/heads/branch_6x from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=602a72d ]

SOLR-9277: Clean up some more remnants of supporting old and new style solr.xml 
in tests
(cherry picked from commit 7743718)


> Clean up some more remnants of supporting old and new style solr.xml in tests
> -
>
> Key: SOLR-9277
> URL: https://issues.apache.org/jira/browse/SOLR-9277
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: beast-9277
>
>
> I have reason to look at the tests and I'm seeing a few remnants of old/new 
> style Solr.xml support (with and without ). So far:
> > SolrTestCaseJ4.copySolrHomeToTemp with a flag whether old or new style.
> > solr-no-core.xml in test files. Mostly this is identical to solr.xml, here 
> > are the differences:
> in solr-no-core.xml but not solr.xml
>  name="autoReplicaFailoverWaitAfterExpiration">${autoReplicaFailoverWaitAfterExpiration:1}
>  name="autoReplicaFailoverWorkLoopDelay">${autoReplicaFailoverWorkLoopDelay:1}
>  name="autoReplicaFailoverBadNodeExpiration">${autoReplicaFailoverBadNodeExpiration:6}
> in solr.xml but not in solr-no-cores.xml:
> ${leaderVoteWait:1}
> The question here is whether moving the three properties in solr-no-cores.xml 
> to solr.xml  and using solr.xml in all the tests that currently use 
> solr-no-cores.xml would mess up tests and whether leaderVoteWait being in 
> solr.xml would mess up tests currently using solr-no-cores.xml.
> I'll make a quick hack at this to see and we can discuss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9277) Clean up some more remnants of supporting old and new style solr.xml in tests

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365570#comment-15365570
 ] 

ASF subversion and git services commented on SOLR-9277:
---

Commit 7743718d2982c7360911dddb2b4723cb52b58925 in lucene-solr's branch 
refs/heads/master from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7743718 ]

SOLR-9277: Clean up some more remnants of supporting old and new style solr.xml 
in tests


> Clean up some more remnants of supporting old and new style solr.xml in tests
> -
>
> Key: SOLR-9277
> URL: https://issues.apache.org/jira/browse/SOLR-9277
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: beast-9277
>
>
> I have reason to look at the tests and I'm seeing a few remnants of old/new 
> style Solr.xml support (with and without ). So far:
> > SolrTestCaseJ4.copySolrHomeToTemp with a flag whether old or new style.
> > solr-no-core.xml in test files. Mostly this is identical to solr.xml, here 
> > are the differences:
> in solr-no-core.xml but not solr.xml
>  name="autoReplicaFailoverWaitAfterExpiration">${autoReplicaFailoverWaitAfterExpiration:1}
>  name="autoReplicaFailoverWorkLoopDelay">${autoReplicaFailoverWorkLoopDelay:1}
>  name="autoReplicaFailoverBadNodeExpiration">${autoReplicaFailoverBadNodeExpiration:6}
> in solr.xml but not in solr-no-cores.xml:
> ${leaderVoteWait:1}
> The question here is whether moving the three properties in solr-no-cores.xml 
> to solr.xml  and using solr.xml in all the tests that currently use 
> solr-no-cores.xml would mess up tests and whether leaderVoteWait being in 
> solr.xml would mess up tests currently using solr-no-cores.xml.
> I'll make a quick hack at this to see and we can discuss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9277) Clean up some more remnants of supporting old and new style solr.xml in tests

2016-07-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-9277:
-
Attachment: beast-9277

Patch with CHANGES.txt entry

> Clean up some more remnants of supporting old and new style solr.xml in tests
> -
>
> Key: SOLR-9277
> URL: https://issues.apache.org/jira/browse/SOLR-9277
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: beast-9277
>
>
> I have reason to look at the tests and I'm seeing a few remnants of old/new 
> style Solr.xml support (with and without ). So far:
> > SolrTestCaseJ4.copySolrHomeToTemp with a flag whether old or new style.
> > solr-no-core.xml in test files. Mostly this is identical to solr.xml, here 
> > are the differences:
> in solr-no-core.xml but not solr.xml
>  name="autoReplicaFailoverWaitAfterExpiration">${autoReplicaFailoverWaitAfterExpiration:1}
>  name="autoReplicaFailoverWorkLoopDelay">${autoReplicaFailoverWorkLoopDelay:1}
>  name="autoReplicaFailoverBadNodeExpiration">${autoReplicaFailoverBadNodeExpiration:6}
> in solr.xml but not in solr-no-cores.xml:
> ${leaderVoteWait:1}
> The question here is whether moving the three properties in solr-no-cores.xml 
> to solr.xml  and using solr.xml in all the tests that currently use 
> solr-no-cores.xml would mess up tests and whether leaderVoteWait being in 
> solr.xml would mess up tests currently using solr-no-cores.xml.
> I'll make a quick hack at this to see and we can discuss.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #:

2016-07-06 Thread dsmiley
Github user dsmiley commented on the pull request:


https://github.com/apache/lucene-solr/commit/218b986ca0fa395b67817c858b6020160c8b5a7b#commitcomment-18151959
  
I like this much better, Doug.  I suggest aligning this new class a bit 
closer to `MultiBoolValues`, since it's very similar except that it operates on 
exactly two ValueSources rather than a variable number.  Perhaps name it 
"BiBoolValues", with the same rationale as why the JDK has BiPredicate as 
opposed to just Predicate?  Along with the class-name change, the other change 
is to switch the abstract method from `compare(double,double)` to `protected abstract 
boolean func(int doc, FunctionValues lhs, FunctionValues rhs);`
-- the same method name as MultiBoolValues.  Note that it takes FunctionValues, 
which allows a subclass to pick the long value, or some other representation, if 
it wants.

Also, equals & hashCode may need to be updated now: equals should be true only 
if both instances have the same class (class.equals) and the same lhs & rhs.
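
A minimal sketch of what such a class could look like, assuming Lucene's 
ValueSource/FunctionValues APIs as of 6.x; the name and the details are 
illustrative only, not the committed code:

{code}
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.queries.function.FunctionValues;
import org.apache.lucene.queries.function.ValueSource;
import org.apache.lucene.queries.function.docvalues.BoolDocValues;

// Two-operand boolean ValueSource, analogous to MultiBoolValues but fixed at two inputs.
public abstract class BiBoolValues extends ValueSource {
  protected final ValueSource lhs;
  protected final ValueSource rhs;

  protected BiBoolValues(ValueSource lhs, ValueSource rhs) {
    this.lhs = lhs;
    this.rhs = rhs;
  }

  /** Same shape as the MultiBoolValues callback, but with exactly two operands. */
  protected abstract boolean func(int doc, FunctionValues lhs, FunctionValues rhs);

  @Override
  public FunctionValues getValues(Map context, LeafReaderContext readerContext) throws IOException {
    final FunctionValues lhsVals = lhs.getValues(context, readerContext);
    final FunctionValues rhsVals = rhs.getValues(context, readerContext);
    return new BoolDocValues(this) {
      @Override
      public boolean boolVal(int doc) {
        return func(doc, lhsVals, rhsVals);
      }
    };
  }

  @Override
  public boolean equals(Object o) {
    // equal only if same concrete class and same operands, per the note above
    if (this == o) return true;
    if (o == null || getClass() != o.getClass()) return false;
    BiBoolValues other = (BiBoolValues) o;
    return lhs.equals(other.lhs) && rhs.equals(other.rhs);
  }

  @Override
  public int hashCode() {
    return getClass().hashCode() * 31 + lhs.hashCode() * 31 + rhs.hashCode();
  }

  @Override
  public String description() {
    return getClass().getSimpleName() + "(" + lhs.description() + "," + rhs.description() + ")";
  }
}
{code}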


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1262 - Still Failing

2016-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1262/

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:58461/solr: 'location' is not specified 
as a query parameter or as a default repository property or as a cluster 
property.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58461/solr: 'location' is not specified as a 
query parameter or as a default repository property or as a cluster property.
at 
__randomizedtesting.SeedInfo.seed([A3D55222F5BFB121:2B816DF85B43DCD9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testInvalidPath(AbstractCloudBackupRestoreTestCase.java:149)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS] Lucene-Solr-Tests-6.x - Build # 320 - Still Failing

2016-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/320/

1 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testExternalCollectionWatchedNotWatched

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([426E97925D41014C:49D566BE0C18F7E5]:0)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testExternalCollectionWatchedNotWatched(ZkStateReaderTest.java:167)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11416 lines...]
   [junit4] Suite: org.apache.solr.cloud.overseer.ZkStateReaderTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J2/temp/solr.cloud.overseer.ZkStateReaderTest_426E97925D41014C-001/init-core-data-001
   [junit4]   2> 896946 INFO  
(SUITE-ZkStateReaderTest-seed#[426E97925D41014C]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and 

[JENKINS-EA] Lucene-Solr-master-Linux (32bit/jdk-9-ea+125) - Build # 17171 - Failure!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/17171/
Java: 32bit/jdk-9-ea+125 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.update.AutoCommitTest.testMaxTime

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([9D81E2E6CA462F42:7759F0454DCB37E]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:782)
at 
org.apache.solr.update.AutoCommitTest.testMaxTime(AutoCommitTest.java:245)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:533)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(java.base@9-ea/Thread.java:843)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//result[@numFound=0]
xml response was: 

00530530530530what's 
inside?info1539150352731865088muLti-Default422016-07-06T23:36:00.488Z


request was:q=id:530=standard=0=20=2.2
at 

[JENKINS] Lucene-Solr-6.x-Windows (64bit/jdk1.8.0_92) - Build # 300 - Still Failing!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/300/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy

Error Message:
Could not find collection : c1

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : c1
at 
__randomizedtesting.SeedInfo.seed([A43467B507B06DCD:CF7BC7C87EBFB0F7]:0)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:192)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdate(ZkStateReaderTest.java:129)
at 
org.apache.solr.cloud.overseer.ZkStateReaderTest.testStateFormatUpdateWithExplicitRefreshLazy(ZkStateReaderTest.java:48)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 

[jira] [Updated] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-9193:
-
Component/s: SolrJ

> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}
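
For illustration only: a generic tf-idf combination of the traversal count and 
the docFreq, under the assumption that scoreNodes uses a formula of this general 
shape (the exact scoring formula isn't specified in the issue):

{code}
public class NodeScoreSketch {
  // traversalCount plays the role of tf; docFreq and numDocs give the idf
  static double nodeScore(long traversalCount, long docFreq, long numDocs) {
    double idf = Math.log((double) numDocs / (docFreq + 1));
    return traversalCount * idf;
  }

  public static void main(String[] args) {
    // a node that appears rarely in the index gets a larger boost than a common one
    System.out.println(nodeScore(5, 100_000, 1_000_000)); // common node
    System.out.println(nodeScore(5, 100, 1_000_000));     // rare node
  }
}
{code}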



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9193.
--
Resolution: Resolved

> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-07-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9243.
--
Resolution: Implemented

> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> This ticket will add a terms.list parameter to the TermsComponent to retrieve 
> Terms and docFreq for a specific list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.
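
As a rough illustration of how the new parameter could be exercised from SolrJ 
(the collection URL, field name, and term values below are placeholders, not 
taken from the issue):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

public class TermsListSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("qt", "/terms");          // route the request to the terms handler
      params.set("terms", "true");
      params.set("terms.fl", "body");      // field to look the terms up in
      params.set("terms.list", "solr,lucene,search"); // only fetch docFreq for these terms
      QueryResponse rsp = new QueryRequest(params).process(client);
      System.out.println(rsp.getResponse().get("terms"));
    }
  }
}
{code}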



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365325#comment-15365325
 ] 

ASF subversion and git services commented on SOLR-9243:
---

Commit de7a3f6f6842af8b211baa4a0291c967932297c1 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=de7a3f6 ]

SOLR-9193,SOLR-9243: update CHANGES.txt


> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> This ticket will add a terms.list parameter to the TermsComponent to retrieve 
> Terms and docFreq for a specific list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365324#comment-15365324
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit de7a3f6f6842af8b211baa4a0291c967932297c1 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=de7a3f6 ]

SOLR-9193,SOLR-9243: update CHANGES.txt


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365323#comment-15365323
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit a86f25ea0c3cb7e1f628d93cfbc4c7b73dbb92a8 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a86f25e ]

SOLR-9193: Fix-up javdoc


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365315#comment-15365315
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit 2bd6c4ecd774a818168b37e6f09208f8ee4ec45f in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2bd6c4e ]

SOLR-9193,SOLR-9243: update CHANGES.txt


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365314#comment-15365314
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit d9a0eba1a3551b722a700d0fe973ce657b1ce6d8 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d9a0eba ]

SOLR-9193: Fix-up javdoc


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365316#comment-15365316
 ] 

ASF subversion and git services commented on SOLR-9243:
---

Commit 2bd6c4ecd774a818168b37e6f09208f8ee4ec45f in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2bd6c4e ]

SOLR-9193,SOLR-9243: update CHANGES.txt


> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> This ticket will add a terms.list parameter to the TermsComponent to retrieve 
> Terms and docFreq for a specific list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.
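
As a sketch of how such a request might look (the field name and term values are made up, and the exact parameter shape is whatever the final patch settles on):

{code}
http://localhost:8983/solr/collection1/terms?terms=true&terms.fl=title&terms.list=hadoop,solr,lucene
{code}

The response would return the docFreq for each listed term, which is exactly what SOLR-9193 needs for its idf computation.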



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #:

2016-07-06 Thread maedhroz
Github user maedhroz commented on the pull request:


https://github.com/apache/lucene-solr/commit/af07ee65186489206ac2013017463c4314d09912#commitcomment-18150391
  
In solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java on line 776:

Absolutely. I made one more pass...


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 536 - Failure

2016-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/536/

No tests ran.

Build Log:
[...truncated 40562 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (10.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.0.0-src.tgz...
   [smoker] 29.8 MB in 0.03 sec (869.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.tgz...
   [smoker] 64.3 MB in 0.07 sec (877.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.0.0.zip...
   [smoker] 74.9 MB in 0.09 sec (877.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6022 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6022 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 224 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (33.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.0.0-src.tgz...
   [smoker] 39.1 MB in 0.05 sec (755.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.tgz...
   [smoker] 137.2 MB in 0.19 sec (719.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.0.0.zip...
   [smoker] 145.9 MB in 0.18 sec (833.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-7.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_92) - Build # 5963 - Still Failing!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5963/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CdcrVersionReplicationTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [InternalHttpClient]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [InternalHttpClient]
at __randomizedtesting.SeedInfo.seed([4C97FC84A55A730E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor21.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=3581, 
name=SocketProxy-Response-65110:65476, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=3581, name=SocketProxy-Response-65110:65476, 
state=RUNNABLE, group=TGRP-HttpPartitionTest]
Caused by: java.lang.RuntimeException: java.net.SocketException: Socket is 
closed
at __randomizedtesting.SeedInfo.seed([4C97FC84A55A730E]:0)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:347)
Caused by: java.net.SocketException: Socket is closed
at java.net.Socket.setSoTimeout(Socket.java:1137)
at 
org.apache.solr.cloud.SocketProxy$Bridge$Pump.run(SocketProxy.java:344)


FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:53430/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-master-Windows/solr/build/solr-core/test/J0/temp/solr.cloud.TestLocalFSCloudBackupRestore_4C97FC84A55A730E-001/tempDir-002/mytestbackup/

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:53430/solr: The backup directory already 
exists: 
file:///C:/Users/jenkins/workspace/Lucene-Solr-master-Windows/solr/build/solr-core/test/J0/temp/solr.cloud.TestLocalFSCloudBackupRestore_4C97FC84A55A730E-001/tempDir-002/mytestbackup/
at 
__randomizedtesting.SeedInfo.seed([4C97FC84A55A730E:C4C3C35E0BA61EF6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 

Re: Lucene Block term Dictionary

2016-07-06 Thread Michael McCandless
The latest terms dictionary is "block tree", and unfortunately there are no
guides here, besides of course the source code
(BlockTreeTermsWriter/Reader).  See especially the comments in those
sources: they point to a paper describing the inspiration for this
implementation.

The high-level view is that this terms dictionary breaks the sorted terms
into variable-sized blocks (25 to 48 terms in each block) at "good"
boundaries, where the term prefixes change, to maximize overall compression.

The in-memory (JVM heap) FST terms index is used to find which on-disk
block may contain a given term; on lookup, we walk the FST, then seek to
that block and scan it.
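
From the API side, a single-term lookup that exercises this path looks roughly like the sketch below (index path, field, and term are placeholders); the block tree machinery sits behind TermsEnum.seekExact for the default codec:

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.BytesRef;

public class TermLookupSketch {
  public static void main(String[] args) throws Exception {
    try (DirectoryReader reader =
             DirectoryReader.open(FSDirectory.open(Paths.get("/path/to/index")))) {
      for (LeafReaderContext leaf : reader.leaves()) {
        Terms terms = leaf.reader().terms("body");   // per-segment terms dictionary
        if (terms == null) continue;
        TermsEnum te = terms.iterator();
        // seekExact consults the in-memory FST index to find the candidate
        // on-disk block, then scans that block for the exact term.
        if (te.seekExact(new BytesRef("lucene"))) {
          System.out.println("segment docFreq = " + te.docFreq());
        }
      }
    }
  }
}
{code}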

Mike McCandless

http://blog.mikemccandless.com

On Wed, Jul 6, 2016 at 12:04 PM, Mohit Sidana  wrote:

> Hello,
>
> I am interested in learning more about how Lucene uses the block tree term
> dictionary.
>
> While researching this topic I found some useful information at the links
> below.
>
>
> 1.
> http://blog.mikemccandless.com/2014/05/choosing-fast-unique-identifier-uuid.html
> 2.
> http://blog.mikemccandless.com/2013/09/lucene-now-has-in-memory-terms.html
> 3. http://www.slideshare.net/lucenerevolution/what-is-inaluceneagrandfinal
>
>
> I understand that Lucene uses an FST to store prefixes of terms in
> memory and looks up terms/postings on disk, but I am unable to visualize how
> an actual search works in Lucene 6.0.
>
> Can someone please suggest a guide I can follow to understand, step by
> step, how a term search actually works with the block tree terms
> dictionary?
>
> Thanks.
>


[jira] [Issue Comment Deleted] (SOLR-7903) Add the FacetStream to the Streaming API and Wire It Into the SQLHandler

2016-07-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7903:
-
Comment: was deleted

(was: Commit db295440a6a9aa0d43a2611c81331feda50a5834 in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=db29544 ]

SOLR-7903: Comment out trappy references to example docs in elevate.xml files
)

> Add the FacetStream to the Streaming API and Wire It Into the SQLHandler
> 
>
> Key: SOLR-7903
> URL: https://issues.apache.org/jira/browse/SOLR-7903
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-7093.patch, SOLR-7093.patch, SOLR-7093.patch, 
> SOLR-7093.patch, SOLR-7903.patch, SOLR-7903.patch, SOLR-7903.patch, 
> SOLR-7903.patch, SOLR-7903.patch
>
>
> This ticket adds the FacetStream class to the Streaming API and wires it into 
> the SQLHandler. The FacetStream will abstract the results from the JSON Facet 
> API as a Stream of Tuples. This will provide an alternative to the 
> RollupStream which uses Map/Reduce for aggregations.
> This ticket will also wire the FacetStream into the SQL interface, allowing 
> users to switch between the RollupStream (Map/Reduce) and the FacetStream 
> (JSON Facet API) as the underlying engine for SQL Group By aggregates.  SQL 
> clients can switch between Facets and Map Reduce with the new 
> *aggregationMode* http param.
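
For illustration only (collection, field, and SQL are made up, and the request is shown unencoded for readability), a SQL request choosing the facet engine might look like:

{code}
/sql?aggregationMode=facet&stmt=SELECT fieldA, count(*) FROM collection1 GROUP BY fieldA
{code}

with aggregationMode=map_reduce presumably selecting the RollupStream path instead.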



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-7903) Add the FacetStream to the Streaming API and Wire It Into the SQLHandler

2016-07-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-7903:
--

Re-opening to remove mis-typed push messages

> Add the FacetStream to the Streaming API and Wire It Into the SQLHandler
> 
>
> Key: SOLR-7903
> URL: https://issues.apache.org/jira/browse/SOLR-7903
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-7093.patch, SOLR-7093.patch, SOLR-7093.patch, 
> SOLR-7093.patch, SOLR-7903.patch, SOLR-7903.patch, SOLR-7903.patch, 
> SOLR-7903.patch, SOLR-7903.patch
>
>
> This ticket adds the FacetStream class to the Streaming API and wires it into 
> the SQLHandler. The FacetStream will abstract the results from the JSON Facet 
> API as a Stream of Tuples. This will provide an alternative to the 
> RollupStream which uses Map/Reduce for aggregations.
> This ticket will also wire the FacetStream into the SQL interface, allowing 
> users to switch between the RollupStream (Map/Reduce) and the FacetStream 
> (JSON Facet API) as the underlying engine for SQL Group By aggregates.  SQL 
> clients can switch between Facets and Map Reduce with the new 
> *aggregationMode* http param.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7903) Add the FacetStream to the Streaming API and Wire It Into the SQLHandler

2016-07-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7903.
--
Resolution: Fixed

Sorry for the noise

> Add the FacetStream to the Streaming API and Wire It Into the SQLHandler
> 
>
> Key: SOLR-7903
> URL: https://issues.apache.org/jira/browse/SOLR-7903
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-7093.patch, SOLR-7093.patch, SOLR-7093.patch, 
> SOLR-7093.patch, SOLR-7903.patch, SOLR-7903.patch, SOLR-7903.patch, 
> SOLR-7903.patch, SOLR-7903.patch
>
>
> This ticket adds the FacetStream class to the Streaming API and wires it into 
> the SQLHandler. The FacetStream will abstract the results from the JSON Facet 
> API as a Stream of Tuples. This will provide an alternative to the 
> RollupStream which uses Map/Reduce for aggregations.
> This ticket will also wire the FacetStream into the SQL interface, allowing 
> users to switch between the RollupStream (Map/Reduce) and the FacetStream 
> (JSON Facet API) as the underlying engine for SQL Group By aggregates.  SQL 
> clients can switch between Facets and Map Reduce with the new 
> *aggregationMode* http param.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-7903) Add the FacetStream to the Streaming API and Wire It Into the SQLHandler

2016-07-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-7903:
-
Comment: was deleted

(was: Commit 6a278333f2836d47c189ac95d2af9d465f22c676 in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6a27833 ]

SOLR-7903: Comment out trappy references to example docs in elevate.xml files
(cherry picked from commit db29544)
)

> Add the FacetStream to the Streaming API and Wire It Into the SQLHandler
> 
>
> Key: SOLR-7903
> URL: https://issues.apache.org/jira/browse/SOLR-7903
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-7093.patch, SOLR-7093.patch, SOLR-7093.patch, 
> SOLR-7093.patch, SOLR-7903.patch, SOLR-7903.patch, SOLR-7903.patch, 
> SOLR-7903.patch, SOLR-7903.patch
>
>
> This ticket adds the FacetStream class to the Streaming API and wires it into 
> the SQLHandler. The FacetStream will abstract the results from the JSON Facet 
> API as a Stream of Tuples. This will provide an alternative to the 
> RollupStream which uses Map/Reduce for aggregations.
> This ticket will also wire the FacetStream into the SQL interface, allowing 
> users to switch between the RollupStream (Map/Reduce) and the FacetStream 
> (JSON Facet API) as the underlying engine for SQL Group By aggregates.  SQL 
> clients can switch between Facets and Map Reduce with the new 
> *aggregationMode* http param.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9273) Share and reuse config set in a node

2016-07-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365207#comment-15365207
 ] 

Tomás Fernández Löbbe commented on SOLR-9273:
-

bq. If you want something like that then it is probably better to use config 
templates via SOLR-7742 and SOLR-5955. I don't think such a change belongs to 
this issue.
I always think of the config overlay as the collection-specific overrides of a
ConfigSet, editable via the Collections API. But yes, now I remember the
discussion in SOLR-7570, and it looks like there is more work to do, since
people may be relying on the fact that a change in the overlay changes all
collections using the ConfigSet.

> Share and reuse config set in a node
> 
>
> Key: SOLR-9273
> URL: https://issues.apache.org/jira/browse/SOLR-9273
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis, SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.2, master (7.0)
>
>
> Currently, each core in a node ends up creating a completely new instance of 
> ConfigSet with its own schema, solrconfig and other properties. This is 
> wasteful when you have a lot of replicas in the same node with many of them 
> referring to the same config set in Zookeeper.
> There are many issues that need to be addressed for this to work so this is a 
> parent issue to track the work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7930) Comment out trappy references to example docs in elevate.xml files

2016-07-06 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7930?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-7930.
--
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.2

BAH. I mis-typed the JIRA. The hashes for this checkin were:

master: db295440a6a9aa0d43a2611c81331feda50a5834
6x: 6a278333f2836d47c189ac95d2af9d465f22c676

> Comment out trappy references to example docs in elevate.xml files
> --
>
> Key: SOLR-7930
> URL: https://issues.apache.org/jira/browse/SOLR-7930
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.3, 6.0
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-7930.patch
>
>
> What do people think about this? QEV, especially with the default example, is
> trappy when someone defines the <uniqueKey> as something other than a string.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7903) Add the FacetStream to the Streaming API and Wire It Into the SQLHandler

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365194#comment-15365194
 ] 

ASF subversion and git services commented on SOLR-7903:
---

Commit 6a278333f2836d47c189ac95d2af9d465f22c676 in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6a27833 ]

SOLR-7903: Comment out trappy references to example docs in elevate.xml files
(cherry picked from commit db29544)


> Add the FacetStream to the Streaming API and Wire It Into the SQLHandler
> 
>
> Key: SOLR-7903
> URL: https://issues.apache.org/jira/browse/SOLR-7903
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-7093.patch, SOLR-7093.patch, SOLR-7093.patch, 
> SOLR-7093.patch, SOLR-7903.patch, SOLR-7903.patch, SOLR-7903.patch, 
> SOLR-7903.patch, SOLR-7903.patch
>
>
> This ticket adds the FacetStream class to the Streaming API and wires it into 
> the SQLHandler. The FacetStream will abstract the results from the JSON Facet 
> API as a Stream of Tuples. This will provide an alternative to the 
> RollupStream which uses Map/Reduce for aggregations.
> This ticket will also wire the FacetStream into the SQL interface, allowing 
> users to switch between the RollupStream (Map/Reduce) and the FacetStream 
> (JSON Facet API) as the underlying engine for SQL Group By aggregates.  SQL 
> clients can switch between Facets and Map Reduce with the new 
> *aggregationMode* http param.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7903) Add the FacetStream to the Streaming API and Wire It Into the SQLHandler

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365176#comment-15365176
 ] 

ASF subversion and git services commented on SOLR-7903:
---

Commit db295440a6a9aa0d43a2611c81331feda50a5834 in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=db29544 ]

SOLR-7903: Comment out trappy references to example docs in elevate.xml files


> Add the FacetStream to the Streaming API and Wire It Into the SQLHandler
> 
>
> Key: SOLR-7903
> URL: https://issues.apache.org/jira/browse/SOLR-7903
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.0
>
> Attachments: SOLR-7093.patch, SOLR-7093.patch, SOLR-7093.patch, 
> SOLR-7093.patch, SOLR-7903.patch, SOLR-7903.patch, SOLR-7903.patch, 
> SOLR-7903.patch, SOLR-7903.patch
>
>
> This ticket adds the FacetStream class to the Streaming API and wires it into 
> the SQLHandler. The FacetStream will abstract the results from the JSON Facet 
> API as a Stream of Tuples. This will provide an alternative to the 
> RollupStream which uses Map/Reduce for aggregations.
> This ticket will also wire the FacetStream into the SQL interface, allowing 
> users to switch between the RollupStream (Map/Reduce) and the FacetStream 
> (JSON Facet API) as the underlying engine for SQL Group By aggregates.  SQL 
> clients can switch between Facets and Map Reduce with the new 
> *aggregationMode* http param.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Re: potential accuracy degradation due to approximation of document length in BM25 (and other similarities)

2016-07-06 Thread David Smiley
Leo,
There may be confusion here as to where the space is wasted.  1 vs 8 bytes
per doc on disk is peanuts, sure, but in RAM it is not and that is the
concern.  AFAIK the norms are memory-mapped in, and we need to ensure it's
trivial to know which offset to go to on disk based on a document id, which
precludes compression but maybe you have ideas to improve that.

To use your own norms encoding, see Codec.normsFormat.  (disclaimer: I
haven't used this but I know where to look)
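
For a concrete sense of the precision under discussion, here is a minimal sketch, assuming the 6.x BM25Similarity still encodes its one-byte norm as SmallFloat.floatToByte315 of 1/sqrt(fieldLength); the lengths are arbitrary examples:

{code}
import org.apache.lucene.util.SmallFloat;

public class NormPrecisionSketch {
  public static void main(String[] args) {
    // Nearby field lengths collapse to the same encoded byte, which is the
    // approximation Leo reports a >10% impact from on his data.
    for (int len : new int[] {100, 110, 120, 130}) {
      byte b = SmallFloat.floatToByte315((float) (1.0 / Math.sqrt(len)));
      float decoded = SmallFloat.byte315ToFloat(b);
      System.out.printf("length=%d -> byte=%d -> decoded length ~ %.1f%n",
          len, b, 1.0 / (decoded * decoded));
    }
  }
}
{code}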

~ David

On Wed, Jul 6, 2016 at 5:31 PM Leo Boytsov  wrote:

> Hi,
>
> for some reason I didn't get a reply from the mailing list directly, so I
> have to send a new message. I would appreciate it if this could be fixed so
> that I receive replies as well.
>
> First of all, I don't buy the claim about the issue being well-known. I
> would actually argue that nobody except a few Lucene devs know about it.
> There is also a bug in Lucene's tutorial example. This needs to be fixed as
> well.
>
> Neither do I find your arguments convincing. In particular, I don't think
> that there is any serious waste of space. Please see my detailed comments
> below. Note that I definitely don't know all the internals well, so I would
> appreciate it if you could explain them better.
>
> The downsides are documented and known. But I don't think you are
>> fully documenting the tradeoffs here, by encoding up to a 64-bit long,
>> you can use up to *8x more memory and disk space* than what lucene
>> does by default, and that is per-field.
>
>
> This is not true. First of all, the increase is only for the textual
> fields. Simple fields like Integers don't use normalization factors. So,
> there is no increase for them.
>
> In the worst case, you will have 7 extra bytes for a *text* field.
> However, this is not an 8x increase.
>
> If you do *compress* the length of the text field, then its encoded size
> will depend on how long the field is. For example, one extra byte will be
> required for fields that contain more than 256 words, two extra bytes for
> fields with more than 65536 words, and so on. *Compared to the field
> sizes, a several-byte* increase is simply *laughable*.
>
> If Lucene saved the normalization factor *without compression*, it would
> already use 8 bytes. So storing the full document length won't make a
> difference.
>
>
>> So that is a real trap. Maybe
>> throw an exception there instead if the boost != 1F (just don't
>> support it), and add a guard for "supermassive" documents, so that at
>> most only 16 bits are ever used instead of 64. The document need not
>> really be massive, it can happen just from a strange analysis chain
>> (n-grams etc) that you get large values here.
>>
>
> As mentioned above, storing a few extra bytes for supermassive documents
> doesn't affect the overall storage by more than a tiny fraction of a
> percent.
>
>
>>
>> I have run comparisons in the past on standard collections to see what
>> happens with this "feature"  and differences were very small. I think
>> in practice people do far more damage by sharding their document
>> collections but not using a distributed interchange of IDF, causing
>> results from different shards to be incomparable :)
>>
>
> Ok, this is not what I see on my data. I see *more than* a 10%
> degradation. This is not peanuts. Do we want to re-run experiments on
> standard collections? Don't forget that Lucene is now used as a baseline to
> compare against. People claim to beat BM25 while they beat something
> inferior.
>
>
>>
>> As far as the bm25 tableization, that is *not* the justification for
>> using an 8 byte encoding. The 8 byte encoding was already there long
>> ago (to save memory/space) when bm25 was added, that small
>> optimization just takes advantage of it. The optimization exists just
>> so that bm25 will have comparable performance to ClassicSimilarity.
>>
>
> Sorry, I don't understand this comment. What kind of 8-byte encoding are
> you talking about? Do you mean a single-byte encoding? This is what the
> current BM25 similarity seems to use.
>
> I also don't quite understand what is a justification for what, please,
> clarify.
>
>
>>
>> Either way, the document's length can be stored with more accuracy,
>> without wasting space, especially if you don't use index-time
>> boosting. But the default encoding supports features like that because
>> lucene supports all these features.
>>
>
> Sorry, I don't get this again. Which features should Lucene support? If
> you like to use boosting in exactly the same way you used it before (though
> I won't recommend doing so), you can do this. In fact, my implementation
> tries to mimic this as much as possible. If you mean something else,
> please, clarify.
>
> Also, how does one save document length with more accuracy? Is there a
> special API or something?
>
> Thank you!
>
>
>>
>>
>> On Mon, Jul 4, 2016 at 1:53 AM, Leo Boytsov  wrote:
>> > Hi everybody,
>> >
>> > Some time ago, I had 

[GitHub] lucene-solr pull request #:

2016-07-06 Thread dsmiley
Github user dsmiley commented on the pull request:


https://github.com/apache/lucene-solr/commit/af07ee65186489206ac2013017463c4314d09912#commitcomment-18149367
  
In solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java on line 776:
Couldn't fields be null here?


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 1261 - Still Failing

2016-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1261/

2 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at https://127.0.0.1:41179/solr: 'location' is not specified 
as a query parameter or as a default repository property or as a cluster 
property.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:41179/solr: 'location' is not specified as a 
query parameter or as a default repository property or as a cluster property.
at 
__randomizedtesting.SeedInfo.seed([DF2F8BC5AF116CAC:577BB41F01ED0154]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testInvalidPath(AbstractCloudBackupRestoreTestCase.java:149)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

Re: Re: potential accuracy degradation due to approximation of document length in BM25 (and other similarities)

2016-07-06 Thread Leo Boytsov
Hi,

for some reason I didn't get a reply from the mailing list directly, so I
have to send a new message. I would appreciate it if this could be fixed so
that I receive replies as well.

First of all, I don't buy the claim about the issue being well-known. I
would actually argue that nobody except a few Lucene devs know about it.
There is also a bug in Lucene's tutorial example. This needs to be fixed as
well.

Neither do I find your arguments convincing. In particular, I don't think
that there is any serious waste of space. Please see my detailed comments
below. Note that I definitely don't know all the internals well, so I would
appreciate it if you could explain them better.

The downsides are documented and known. But I don't think you are
> fully documenting the tradeoffs here, by encoding up to a 64-bit long,
> you can use up to *8x more memory and disk space* than what lucene
> does by default, and that is per-field.


This is not true. First of all, the increase is only for the textual
fields. Simple fields like Integers don't use normalization factors. So,
there is no increase for them.

In the worst case, you will have 7 extra bytes for a *text* field. However,
this is not an 8x increase.

If you do *compress* the length of the text field, then its encoded size
will depend on how long the field is. For example, one extra byte will be
required for fields that contain more than 256 words, two extra bytes for
fields with more than 65536 words, and so on. *Compared to the field
sizes, a several-byte* increase is simply *laughable*.
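
A tiny sketch of the byte counts behind that claim (plain Java, nothing Lucene-specific; it simply computes the whole bytes needed to hold a field length exactly):

{code}
public class LengthBytesSketch {
  // Whole bytes needed to store a field length exactly,
  // ignoring any length-prefix or framing overhead.
  static int bytesNeeded(long fieldLength) {
    int bits = 64 - Long.numberOfLeadingZeros(Math.max(1, fieldLength));
    return (bits + 7) / 8;
  }

  public static void main(String[] args) {
    for (long len : new long[] {200, 50_000, 100_000}) {
      System.out.println(len + " words -> " + bytesNeeded(len) + " byte(s)");
    }
  }
}
{code}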

If Lucene saved the normalization factor *without compression*, it would
already use 8 bytes. So storing the full document length won't make a
difference.


> So that is a real trap. Maybe
> throw an exception there instead if the boost != 1F (just don't
> support it), and add a guard for "supermassive" documents, so that at
> most only 16 bits are ever used instead of 64. The document need not
> really be massive, it can happen just from a strange analysis chain
> (n-grams etc) that you get large values here.
>

As mentioned above, storing a few extra bytes for supermassive documents
doesn't affect the overall storage by more than a tiny fraction of a
percent.


>
> I have run comparisons in the past on standard collections to see what
> happens with this "feature"  and differences were very small. I think
> in practice people do far more damage by sharding their document
> collections but not using a distributed interchange of IDF, causing
> results from different shards to be incomparable :)
>

Ok, this is not what I see on my data. I see *more than* a 10% degradation.
This is not peanuts. Do we want to re-run experiments on standard
collections? Don't forget that Lucene is now used as a baseline to compare
against. People claim to beat BM25 while they beat something inferior.


>
> As far as the bm25 tableization, that is *not* the justification for
> using an 8 byte encoding. The 8 byte encoding was already there long
> ago (to save memory/space) when bm25 was added, that small
> optimization just takes advantage of it. The optimization exists just
> so that bm25 will have comparable performance to ClassicSimilarity.
>

Sorry, I don't understand this comment. What kind of 8-byte encoding are
you talking about? Do you mean a single-byte encoding? This is what the
current BM25 similarity seems to use.

I also don't quite understand what is a justification for what, please,
clarify.


>
> Either way, the document's length can be stored with more accuracy,
> without wasting space, especially if you don't use index-time
> boosting. But the default encoding supports features like that because
> lucene supports all these features.
>

Sorry, I don't get this again. Which features should Lucene support? If you
like to use boosting in exactly the same way you used it before (though I
won't recommend doing so), you can do this. In fact, my implementation
tries to mimic this as much as possible. If you mean something else,
please, clarify.

Also, how does one save document length with more accuracy? Is there a
special API or something?

Thank you!


>
>
> On Mon, Jul 4, 2016 at 1:53 AM, Leo Boytsov  wrote:
> > Hi everybody,
> >
> > Some time ago, I had to re-implement some Lucene similarities (in
> particular
> > BM25 and the older cosine). I noticed that the re-implemented version
> > (despite using the same formula) performed better on my data set. The
> main
> > difference was that my version did not approximate document length.
> >
> > Recently, I have implemented a modification of the current Lucene BM25
> that
> > doesn't use this approximation either. I compared the existing and the
> > modified similarities (again on some of my quirky data sets). The results
> > are as follows:
> >
> > 1) The modified Lucene BM25 similarity is, indeed, a tad slower (3-5% in
> > my tests).
> > 2) The modified Lucene BM25 is also more accurate
> > (I don't see a 

[jira] [Resolved] (SOLR-9180) need better cloud & RTG testing of TestPseudoReturnFields

2016-07-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-9180.

   Resolution: Fixed
Fix Version/s: master (7.0)
   6.2

> need better cloud & RTG testing of TestPseudoReturnFields
> -
>
> Key: SOLR-9180
> URL: https://issues.apache.org/jira/browse/SOLR-9180
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9180.patch, SOLR-9180.patch, SOLR-9180.patch
>
>
> on the mailing list, Charles Sanders noted that the {{[explain]}} transformer 
> wasn't working in Solr 5(.5.1) - showing a sample query that indicated he was 
> using SolrCloud.
> In 6.0 and on master this works fine, so whatever bug affects 5.x was fixed 
> at some point -- but we don't appear to have any cloud based tests that 
> invoke {{[explain]}}, so we should add something akin to 
> TestPseudoReturnFields to ensure no regressions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-07-06 Thread Caleb Rackliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365149#comment-15365149
 ] 

Caleb Rackliffe commented on SOLR-8858:
---

bq. If there is no document cache and lazy field loading is disabled, then we 
can pass through the fields requested to the codec instead of getting them all 
right?

That works for me.

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.
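
For context, passing the filter through at the Lucene level is roughly the sketch below; the field names are made up, and this is only an illustration of a visitor constructed with the requested fields, not the actual SolrIndexSearcher patch:

{code}
import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.DocumentStoredFieldVisitor;
import org.apache.lucene.index.IndexReader;

class FilteredDocLoadSketch {
  // Load only the requested stored fields instead of the whole document.
  static Document load(IndexReader reader, int docId) throws IOException {
    DocumentStoredFieldVisitor visitor = new DocumentStoredFieldVisitor("id", "title");
    reader.document(docId, visitor);
    return visitor.getDocument();
  }
}
{code}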



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9289) SolrCloud RTG: fl=[docid] silently ignored for all docs

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365139#comment-15365139
 ] 

ASF subversion and git services commented on SOLR-9289:
---

Commit ae316f1e39e58d89758f997913a38059d74ccb47 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae316f1 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs


> SolrCloud RTG: fl=[docid] silently ignored for all docs
> ---
>
> Key: SOLR-9289
> URL: https://issues.apache.org/jira/browse/SOLR-9289
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In SolrCloud mode, the {{\[docid\]}} transformer is completely ignored when 
> used in a RTG request (even for commited docs) ... this is inconsistent with 
> single node solr behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9285) ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on uncommitted doc

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365135#comment-15365135
 ] 

ASF subversion and git services commented on SOLR-9285:
---

Commit ae316f1e39e58d89758f997913a38059d74ccb47 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae316f1 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs


> ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on 
> uncommitted doc
> -
>
> Key: SOLR-9285
> URL: https://issues.apache.org/jira/browse/SOLR-9285
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> Even in single node solr envs, doing an RTG for an uncommitted doc that uses 
> ValueSourceAugmenter (ie: simple field aliasing, or functions in fl) causes 
> an ArrayIndexOutOfBoundsException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9180) need better cloud & RTG testing of TestPseudoReturnFields

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365140#comment-15365140
 ] 

ASF subversion and git services commented on SOLR-9180:
---

Commit 1125a8a8efd53f387d10da1658d005db03cf6ccc in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1125a8a ]

Merge remote-tracking branch 'refs/remotes/origin/master' (SOLR-9180)


> need better cloud & RTG testing of TestPseudoReturnFields
> -
>
> Key: SOLR-9180
> URL: https://issues.apache.org/jira/browse/SOLR-9180
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9180.patch, SOLR-9180.patch, SOLR-9180.patch
>
>
> on the mailing list, Charles Sanders noted that the {{[explain]}} transformer 
> wasn't working in Solr 5(.5.1) - showing a sample query that indicated he was 
> using SolrCloud.
> In 6.0 and on master this works fine, so whatever bug affects 5.x was fixed 
> at some point -- but we don't appear to have any cloud based tests that 
> invoke {{[explain]}}, so we should add something akin to 
> TestPseudoReturnFields to ensure no regressions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9286) SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], etc...) silently fails (even for committed doc)

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365136#comment-15365136
 ] 

ASF subversion and git services commented on SOLR-9286:
---

Commit ae316f1e39e58d89758f997913a38059d74ccb47 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae316f1 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs


> SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], etc...) 
> silently fails (even for committed doc)
> -
>
> Key: SOLR-9286
> URL: https://issues.apache.org/jira/browse/SOLR-9286
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG with ValueSourceAugmenter (ie: field aliasing or functions in 
> the fl) in SolrCloud, the request can succeed w/o actually performing the 
> field aliasing and/or ValueSourceAugmenter additions.
> This is inconsistent with single-node solr installs (at least as far as 
> committed docs go, see SOLR-9285 regarding uncommitted docs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9287) single node RTG: NPE if score is requested

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365137#comment-15365137
 ] 

ASF subversion and git services commented on SOLR-9287:
---

Commit ae316f1e39e58d89758f997913a38059d74ccb47 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae316f1 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs


> single node RTG: NPE if score is requested
> --
>
> Key: SOLR-9287
> URL: https://issues.apache.org/jira/browse/SOLR-9287
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In single node solr setups, if an RTG request is made that includes "score" 
> in the fl, then there is an NPE from ResultContext.wantsScores.
> This does *not* happen if the same request happens in a SolrCloud setup - in 
> that case the request for "score" is silently ignored -- this seems to me 
> like the optimal behavior  (similarly: using the {{\[explain\]}} transformer 
> in the fl for an RTG is currently silently ignored for both single node and 
> solr cloud envs)
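
A minimal sketch of the failing request shape, assuming a collection named "collection1"; per the issue, on a single node this trips the NPE in ResultContext.wantsScores, while in SolrCloud the score is silently ignored:

{code}
# RTG asking for score as part of the fl
http://localhost:8983/solr/collection1/get?id=1&fl=id,score
{code}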



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9288) RTG: fl=[docid] silently missing for uncommitted docs

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365138#comment-15365138
 ] 

ASF subversion and git services commented on SOLR-9288:
---

Commit ae316f1e39e58d89758f997913a38059d74ccb47 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae316f1 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs


> RTG: fl=[docid] silently missing for uncommitted docs
> -
>
> Key: SOLR-9288
> URL: https://issues.apache.org/jira/browse/SOLR-9288
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG in a single node solr install, the {{\[docid\]}} transformer 
> works for committed docs, but is silently missing for uncommitted docs.
> This inconsistency is confusing.  It seems like even if there is no valid 
> docid to return in this case, the key should still be present in the 
> resulting doc.
> I would suggest using either {{null}} or {{-1}} in this case?
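
A minimal sketch of the request in question, assuming a collection named "collection1"; per the issue, for an uncommitted doc the [docid] key is simply absent from the response, and the suggestion above is to return null or -1 instead:

{code}
# RTG asking for the [docid] doc transformer
http://localhost:8983/solr/collection1/get?id=1&fl=id,[docid]
{code}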



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9180) need better cloud & RTG testing of TestPseudoReturnFields

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365133#comment-15365133
 ] 

ASF subversion and git services commented on SOLR-9180:
---

Commit f69e624645f62e1f2224f5ddb035379491a7a0ce in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f69e624 ]

Merge remote-tracking branch 'refs/remotes/origin/branch_6x' into branch_6x 
(SOLR-9180)


> need better cloud & RTG testing of TestPseudoReturnFields
> -
>
> Key: SOLR-9180
> URL: https://issues.apache.org/jira/browse/SOLR-9180
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9180.patch, SOLR-9180.patch, SOLR-9180.patch
>
>
> on the mailing list, Charles Sanders noted that the {{[explain]}} transformer 
> wasn't working in Solr 5(.5.1) - showing a sample query that indicated he was 
> using SolrCloud.
> In 6.0 and on master this works fine, so whatever bug affects 5.x was fixed 
> at some point -- but we don't appear to have any cloud based tests that 
> invoke {{[explain]}}, so we should add something akin to 
> TestPseudoReturnFields to ensure no regressions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9289) SolrCloud RTG: fl=[docid] silently ignored for all docs

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365132#comment-15365132
 ] 

ASF subversion and git services commented on SOLR-9289:
---

Commit fee9526208375fec6a7651249b182fbca1a29703 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fee9526 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs

(cherry picked from commit ae316f1e39e58d89758f997913a38059d74ccb47)


> SolrCloud RTG: fl=[docid] silently ignored for all docs
> ---
>
> Key: SOLR-9289
> URL: https://issues.apache.org/jira/browse/SOLR-9289
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In SolrCloud mode, the {{\[docid\]}} transformer is completely ignored when 
> used in a RTG request (even for commited docs) ... this is inconsistent with 
> single node solr behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9286) SolrCloud RTG: pseudo-fields (like ValueSourceAugmenter, [shard], etc...) silently fails (even for committed doc)

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365129#comment-15365129
 ] 

ASF subversion and git services commented on SOLR-9286:
---

Commit fee9526208375fec6a7651249b182fbca1a29703 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fee9526 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs

(cherry picked from commit ae316f1e39e58d89758f997913a38059d74ccb47)


> SolrCloud RTG: pseudo-fields (like ValueSourceAugmenter, [shard], etc...) 
> silently fails (even for committed doc)
> -
>
> Key: SOLR-9286
> URL: https://issues.apache.org/jira/browse/SOLR-9286
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG with ValueSourceAugmenter (ie: field aliasing or functions in 
> the fl) in SolrCloud, the request can succeed w/o actually performing the 
> field aliasing and/or ValueSourceAugmenter additions.
> This is inconsistent with single-node solr installs (at least as far as 
> committed docs go, see SOLR-9285 regarding uncommitted docs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9287) single node RTG: NPE if score is requested

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365130#comment-15365130
 ] 

ASF subversion and git services commented on SOLR-9287:
---

Commit fee9526208375fec6a7651249b182fbca1a29703 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fee9526 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs

(cherry picked from commit ae316f1e39e58d89758f997913a38059d74ccb47)


> single node RTG: NPE if score is requested
> --
>
> Key: SOLR-9287
> URL: https://issues.apache.org/jira/browse/SOLR-9287
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In single node solr setups, if an RTG request is made that includes "score" 
> in the fl, then there is an NPE from ResultContext.wantsScores.
> This does *not* happen if the same request happens in a SolrCloud setup - in 
> that case the request for "score" is silently ignored -- this seems to me 
> like the optimal behavior  (similarly: using the {{\[explain\]}} transformer 
> in the fl for an RTG is currently silently ignored for both single node and 
> solr cloud envs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9285) ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on uncommitted doc

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365128#comment-15365128
 ] 

ASF subversion and git services commented on SOLR-9285:
---

Commit fee9526208375fec6a7651249b182fbca1a29703 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fee9526 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs

(cherry picked from commit ae316f1e39e58d89758f997913a38059d74ccb47)


> ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on 
> uncommitted doc
> -
>
> Key: SOLR-9285
> URL: https://issues.apache.org/jira/browse/SOLR-9285
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> Even in single node solr envs, doing an RTG for an uncommitted doc that uses 
> ValueSourceAugmenter (ie: simple field aliasing, or functions in fl) causes 
> an ArrayIndexOutOfBoundsException
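
A minimal sketch of a reproducing request, assuming a doc with id=1 has been added to a hypothetical "collection1" but not yet committed; per the issue, the aliased pseudo-field (handled via ValueSourceAugmenter) triggers the ArrayIndexOutOfBoundsException:

{code}
# add (but do not commit) a doc with id=1, then issue an RTG with an fl alias
http://localhost:8983/solr/collection1/get?id=1&fl=id,price_alias:price
{code}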



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9180) need better cloud & RTG testing of TestPseudoReturnFields

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365134#comment-15365134
 ] 

ASF subversion and git services commented on SOLR-9180:
---

Commit ae316f1e39e58d89758f997913a38059d74ccb47 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ae316f1 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs


> need better cloud & RTG testing of TestPseudoReturnFields
> -
>
> Key: SOLR-9180
> URL: https://issues.apache.org/jira/browse/SOLR-9180
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9180.patch, SOLR-9180.patch, SOLR-9180.patch
>
>
> on the mailing list, Charles Sanders noted that the {{[explain]}} transformer 
> wasn't working in Solr 5(.5.1) - showing a sample query that indicated he was 
> using SolrCloud.
> In 6.0 and on master this works fine, so whatever bug affects 5.x was fixed 
> at some point -- but we don't appear to have any cloud based tests that 
> invoke {{[explain]}}, so we should add something akin to 
> TestPseudoReturnFields to ensure no regressions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9288) RTG: fl=[docid] silently missing for uncommitted docs

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365131#comment-15365131
 ] 

ASF subversion and git services commented on SOLR-9288:
---

Commit fee9526208375fec6a7651249b182fbca1a29703 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fee9526 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs

(cherry picked from commit ae316f1e39e58d89758f997913a38059d74ccb47)


> RTG: fl=[docid] silently missing for uncommitted docs
> -
>
> Key: SOLR-9288
> URL: https://issues.apache.org/jira/browse/SOLR-9288
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG in a single node solr install, the {{\[docid\]}} transformer 
> works for committed docs, but is silently missing for uncommitted docs.
> This inconsistency is confusing.  It seems like even if there is no valid 
> docid to return in this case, the key should still be present in the 
> resulting doc.
> I would suggest using either {{null}} or {{-1}} in this case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9180) need better cloud & RTG testing of TestPseudoReturnFields

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365127#comment-15365127
 ] 

ASF subversion and git services commented on SOLR-9180:
---

Commit fee9526208375fec6a7651249b182fbca1a29703 in lucene-solr's branch 
refs/heads/branch_6x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fee9526 ]

SOLR-9180: More comprehensive tests of psuedo-fields for RTG and SolrCloud 
requests

This commit also includes new @AwaitsFix'ed tests for the following known 
issues...

 * SOLR-9285 ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with 
RTG on uncommitted doc
 * SOLR-9286 SolrCloud RTG: psuedo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fails (even for committed doc)
 * SOLR-9287 single node RTG: NPE if score is requested
 * SOLR-9288 RTG: fl=[docid] silently missing for uncommitted docs
 * SOLR-9289 SolrCloud RTG: fl=[docid] silently ignored for all docs

(cherry picked from commit ae316f1e39e58d89758f997913a38059d74ccb47)


> need better cloud & RTG testing of TestPseudoReturnFields
> -
>
> Key: SOLR-9180
> URL: https://issues.apache.org/jira/browse/SOLR-9180
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9180.patch, SOLR-9180.patch, SOLR-9180.patch
>
>
> on the mailing list, Charles Sanders noted that the {{[explain]}} transformer 
> wasn't working in Solr 5(.5.1) - showing a sample query that indicated he was 
> using SolrCloud.
> In 6.0 and on master this works fine, so whatever bug affects 5.x was fixed 
> at some point -- but we don't appear to have any cloud based tests that 
> invoke {{[explain]}}, so we should add something akin to 
> TestPseudoReturnFields to ensure no regressions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-07-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-9207.
-
   Resolution: Fixed
Fix Version/s: master (7.0)
   6.2

Thanks Pushkar!

> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 6.2, master (7.0)
>
> Attachments: SOLR-9207.patch, SOLR-9207.patch, SOLR-9207.patch_updated
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This can happen if we update solrconfig to retain more {{tlogs}} in order to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB (code 400)
> {code}
> We arrived at ~99K with the following math
> * max_version_number = Long.MAX_VALUE = 9223372036854775807
> * bytes per version number = 20 (on the wire, the POST request sends each version 
> number as a string)
> * additional bytes for the separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it
> 1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 
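
For clarity, the ~99K figure above works out as follows (using the 2048 KB upload limit as 2 * 1024 * 1024 bytes and 20 digits plus one comma per version, as listed in the description):

{code}
upload limit             = 2048 KB = 2 * 1024 * 1024 = 2,097,152 bytes
bytes per version + ','  = 20 + 1 = 21
max versions per request ~ 2,097,152 / 21 ~ 99,864
{code}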



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365086#comment-15365086
 ] 

ASF subversion and git services commented on SOLR-9207:
---

Commit a942de68fc34602ad0640a2726fd3dd240352357 in lucene-solr's branch 
refs/heads/branch_6x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a942de6 ]

SOLR-9207: PeerSync recovery failes if number of updates requested is high. A 
new useRangeVersions config option is introduced (defaults to true) to send 
version ranges instead of individual versions for peer sync.
(cherry picked from commit 380c5a6)


> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Attachments: SOLR-9207.patch, SOLR-9207.patch, SOLR-9207.patch_updated
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This can happen if we update solrconfig to retain more {{tlogs}} in order to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB (code 400)
> {code}
> We arrived at ~99K with the following math
> * max_version_number = Long.MAX_VALUE = 9223372036854775807
> * bytes per version number = 20 (on the wire, the POST request sends each version 
> number as a string)
> * additional bytes for the separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it
> 1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365081#comment-15365081
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit ed86e014f61474843a8dc064c912d91d51ff5cba in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ed86e01 ]

SOLR-9193: fixing failing tests due to changes in TermsComponent


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}
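
A slightly fuller sketch of how the decorated expression might look; the collection, root node and field names below are illustrative placeholders, not taken from the issue:

{code}
top(n="10",
    sort="nodeScore desc",
    scoreNodes(gatherNodes(emails,
                           walk="johndoe@example.com->from",
                           gather="to",
                           count(*))))
{code}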



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365079#comment-15365079
 ] 

ASF subversion and git services commented on SOLR-9243:
---

Commit 1427f4b2e7599504dc96c4395fd861ffb8224d26 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1427f4b ]

SOLR-9243:Add terms.list parameter to the TermsComponent to fetch the docFreq 
for a list of terms


> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> This ticket will add a terms.list parameter to the TermsComponent to retrieve 
> Terms and docFreq for a specific list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.
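
A sketch of what such a request could look like once the parameter exists; the collection, field and terms below are placeholders:

{code}
# fetch docFreq for an explicit list of terms via the TermsComponent
http://localhost:8983/solr/collection1/terms?terms=true&terms.fl=text&terms.list=solr,lucene
{code}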



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365083#comment-15365083
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit 7a5e6a5f7e479b0950cf0d26484f8789c5aa5fcf in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a5e6a5 ]

SOLR-9193: Added test using the termFreq param and basic error handling


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365082#comment-15365082
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit bc0eac8b6b95bfc4d6cfa612b494fc184cee1a8c in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bc0eac8 ]

SOLR-9193: Fix conflict between parameters of TermsComponent and json facet API


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365084#comment-15365084
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit e27849052ebd7d2314560eb5a1704ca33d442565 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e278490 ]

SOLR-9193: Added terms.limit and distrib=true params to /terms request


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365080#comment-15365080
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit 879a245e4e0b63edaa240e1e138223dd9e86b301 in lucene-solr's branch 
refs/heads/branch_6x from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=879a245 ]

SOLR-9193: Add scoreNodes Streaming Expression

Conflicts:
solr/core/src/java/org/apache/solr/handler/StreamHandler.java


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365072#comment-15365072
 ] 

ASF subversion and git services commented on SOLR-9207:
---

Commit 380c5a6b9727beabb8ccce04add7e8e7089aa798 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=380c5a6 ]

SOLR-9207: PeerSync recovery failes if number of updates requested is high. A 
new useRangeVersions config option is introduced (defaults to true) to send 
version ranges instead of individual versions for peer sync.


> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Attachments: SOLR-9207.patch, SOLR-9207.patch, SOLR-9207.patch_updated
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This can happen if we update solrconfig to retain more {{tlogs}} in order to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB (code 400)
> {code}
> We arrived at ~99K with the following math
> * max_version_number = Long.MAX_VALUE = 9223372036854775807
> * bytes per version number = 20 (on the wire, the POST request sends each version 
> number as a string)
> * additional bytes for the separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it
> 1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-07-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364990#comment-15364990
 ] 

Shalin Shekhar Mangar edited comment on SOLR-9207 at 7/6/16 9:00 PM:
-

Changes:
# The value for useRangeVersions being set in solrconfig.xml wasn't being read 
at all because it was written in solrconfig.xml with the element 'str' but it 
was being read as 'useRangeVersions'. I changed the element name in 
configuration to useRangeVersions to make it work.
# The value for useRangeVersions should be in EditableSolrConfigAttributes.json 
so that it can be changed via the config API
# Similarly, useRangeVersions should be returned in SolrConfig.toMap so that 
its value is returned by the config API
# System property set in SolrTestCaseJ4 for useRangeVersions should be cleared 
in the tear down method

I'll run precommit + tests and commit if there are no surprises.


was (Author: shalinmangar):
Changes:
# The value for useRangeVersions being set in solrconfig.xml wasn't being read 
at all because it was written in solrconfig.xml with the element 'bool' but it 
was being read as 'useRangeVersions'. I changed the element name in 
configuration to useRangeVersions to make it work.
# The value for useRangeVersions should be in EditableSolrConfigAttributes.json 
so that it can be changed via the config API
# Similarly, useRangeVersions should be returned in SolrConfig.toMap so that 
its value is returned by the config API
# System property set in SolrTestCaseJ4 for useRangeVersions should be cleared 
in the tear down method

I'll run precommit + tests and commit if there are no surprises.
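
For context, the element rename described in item 1 means solrconfig.xml would carry an element named for the option itself, something like the snippet below; its exact parent element is not stated in the comment and is deliberately left out here:

{code}
<useRangeVersions>true</useRangeVersions>
{code}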

> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Attachments: SOLR-9207.patch, SOLR-9207.patch, SOLR-9207.patch_updated
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This can happen if we update solrconfig to retain more {{tlogs}} in order to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB (code 400)
> {code}
> We arrived at ~99K with the following math
> * max_version_number = Long.MAX_VALUE = 9223372036854775807
> * bytes per version number = 20 (on the wire, the POST request sends each version 
> number as a string)
> * additional bytes for the separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it
> 1. Ask for updates in chunks of ~90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7287) New lemma-tizer plugin for ukrainian language.

2016-07-06 Thread Andriy Rysin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15365063#comment-15365063
 ] 

Andriy Rysin commented on LUCENE-7287:
--

Thanks Michael, much appreciated!

> New lemma-tizer plugin for ukrainian language.
> --
>
> Key: LUCENE-7287
> URL: https://issues.apache.org/jira/browse/LUCENE-7287
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: Dmytro Hambal
>Priority: Minor
>  Labels: analysis, language, plugin
> Fix For: master (7.0), 6.2
>
> Attachments: LUCENE-7287.patch, Screen Shot 2016-06-23 at 8.23.01 
> PM.png, Screen Shot 2016-06-23 at 8.41.28 PM.png
>
>
> Hi all,
> I wonder whether you are interested in supporting a plugin which provides a 
> mapping between ukrainian word forms and their lemmas. Some tests and docs go 
> out-of-the-box =) .
> https://github.com/mrgambal/elasticsearch-ukrainian-lemmatizer
> It's really simple but still works and generates some value for its users.
> More: https://github.com/elastic/elasticsearch/issues/18303



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 248 - Still Failing!

2016-07-06 Thread David Smiley
Woops; good catch!  I was confused when I deconflicted the cherry-pick.
It's nice our smoke tester catches this stuff :-)  I'll go fix now.

On Wed, Jul 6, 2016 at 4:24 PM Steve Rowe  wrote:

> Hi David,
>
> Looks like you meant to move the Bug Fixes section in Lucene’s CHANGES.txt
> down below the New Features section, but you left the original at the top
> when you created the new one?
>
> --
> Steve
> www.lucidworks.com
>
> > On Jul 6, 2016, at 4:17 PM, Policeman Jenkins Server <
> jenk...@thetaphi.de> wrote:
> >
> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/248/
> > Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC
> >
> > All tests passed
> >
> > Build Log:
> > [...truncated 53916 lines...]
> > changes-to-html:
> >[mkdir] Created dir:
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build/docs/changes
> >  [get] Getting:
> https://issues.apache.org/jira/rest/api/2/project/LUCENE
> >  [get] To:
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build/docs/changes/jiraVersionList.json
> > [exec] Section 'Bug Fixes' appears more than once under release
> '6.2.0' at
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/site/changes/
> changes2html.pl line 135.
> >
> > BUILD FAILED
> > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:740:
> The following error occurred while executing this line:
> > /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:101:
> The following error occurred while executing this line:
> >
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:138:
> The following error occurred while executing this line:
> >
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:479:
> The following error occurred while executing this line:
> >
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:2498:
> exec returned: 255
> >
> > Total time: 95 minutes 20 seconds
> > Build step 'Invoke Ant' marked build as failure
> > Archiving artifacts
> > [WARNINGS] Skipping publisher since build result is FAILURE
> > Recording test results
> > Email was triggered for: Failure - Any
> > Sending email for trigger: Failure - Any
> >
> >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: [JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 248 - Still Failing!

2016-07-06 Thread Steve Rowe
Hi David,

Looks like you meant to move the Bug Fixes section in Lucene’s CHANGES.txt down 
below the New Features section, but you left the original at the top when you 
created the new one?

--
Steve
www.lucidworks.com

> On Jul 6, 2016, at 4:17 PM, Policeman Jenkins Server  
> wrote:
> 
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/248/
> Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC
> 
> All tests passed
> 
> Build Log:
> [...truncated 53916 lines...]
> changes-to-html:
>[mkdir] Created dir: 
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build/docs/changes
>  [get] Getting: https://issues.apache.org/jira/rest/api/2/project/LUCENE
>  [get] To: 
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build/docs/changes/jiraVersionList.json
> [exec] Section 'Bug Fixes' appears more than once under release '6.2.0' 
> at 
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/site/changes/changes2html.pl
>  line 135.
> 
> BUILD FAILED
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:740: The 
> following error occurred while executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:101: The 
> following error occurred while executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:138: 
> The following error occurred while executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:479: 
> The following error occurred while executing this line:
> /export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:2498:
>  exec returned: 255
> 
> Total time: 95 minutes 20 seconds
> Build step 'Invoke Ant' marked build as failure
> Archiving artifacts
> [WARNINGS] Skipping publisher since build result is FAILURE
> Recording test results
> Email was triggered for: Failure - Any
> Sending email for trigger: Failure - Any
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 248 - Still Failing!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/248/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 53916 lines...]
changes-to-html:
[mkdir] Created dir: 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build/docs/changes
  [get] Getting: https://issues.apache.org/jira/rest/api/2/project/LUCENE
  [get] To: 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build/docs/changes/jiraVersionList.json
 [exec] Section 'Bug Fixes' appears more than once under release '6.2.0' at 
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/site/changes/changes2html.pl
 line 135.

BUILD FAILED
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:740: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/build.xml:101: The 
following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:138: 
The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/build.xml:479: 
The following error occurred while executing this line:
/export/home/jenkins/workspace/Lucene-Solr-6.x-Solaris/lucene/common-build.xml:2498:
 exec returned: 255

Total time: 95 minutes 20 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
[WARNINGS] Skipping publisher since build result is FAILURE
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Artifacts-6.x - Build # 106 - Failure

2016-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-6.x/106/

No tests ran.

Build Log:
[...truncated 8070 lines...]
changes-to-html:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build/docs/changes
  [get] Getting: https://issues.apache.org/jira/rest/api/2/project/LUCENE
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build/docs/changes/jiraVersionList.json
 [exec] Section 'Bug Fixes' appears more than once under release '6.2.0' at 
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/site/changes/changes2html.pl
 line 135.

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/build.xml:479: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Artifacts-6.x/lucene/common-build.xml:2498:
 exec returned: 255

Total time: 4 minutes 1 second
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Publishing Javadoc
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2016-07-06 Thread Susheel Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364991#comment-15364991
 ] 

Susheel Kumar commented on SOLR-8146:
-

Hello Noble, Arcadius,

Can you please describe how exactly ImplicitSnitch can be used for 
preferredNodes and if there is anything to be done on SolrJ client to use 
preferredNodes for querying replicas?

I have created a JIRA  https://issues.apache.org/jira/browse/SOLR-9283 to 
document the exact steps/details for anyone to refer.

Thanks,
Susheel

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query then, picking the 
> first item from the list.
> This ticket is to allow more flexibility and control, to some extent, over which 
> URLs will be picked up for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses a regex pattern and moves to the top of the list of URLs 
> only those matching the given regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
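
As a concrete illustration of the proposed mechanism, a client JVM on rack1 could be started with something like the flag below; the hostname pattern is a placeholder and the exact matching semantics are whatever the attached patch implements:

{code}
-Dsolr.preferredQueryNodePattern=.*rack1.*
{code}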
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections with:  
> - multiple large SolrCloud nodes (L) used for production apps and 
> - have 1 small node (S) in the same cluster with less ram/cpu used only for 
> manual user queries, data export and other production issue investigation.
> This ticket would allow to configure the applications using SolrJ to query 
> only the (L) nodes
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross DC deployment. So, replace 
> rack1/rack2 by DC1/DC2
> Any comment would be very appreciated.
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_92) - Build # 1074 - Failure!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/1074/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.schema.TestManagedSchemaAPI

Error Message:
ObjectTracker found 8 object(s) that were not released!!! 
[MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, 
TransactionLog, TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 8 object(s) that were not 
released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, 
MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, 
MockDirectoryWrapper, TransactionLog, TransactionLog]
at __randomizedtesting.SeedInfo.seed([84B1357BCCE6840E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11289 lines...]
   [junit4] Suite: org.apache.solr.schema.TestManagedSchemaAPI
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J2/temp/solr.schema.TestManagedSchemaAPI_84B1357BCCE6840E-001/init-core-data-001
   [junit4]   2> 854284 INFO  
(SUITE-TestManagedSchemaAPI-seed#[84B1357BCCE6840E]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 854286 INFO  
(SUITE-TestManagedSchemaAPI-seed#[84B1357BCCE6840E]-worker) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 854286 INFO  (Thread-1786) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 854286 INFO  (Thread-1786) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 854386 INFO  
(SUITE-TestManagedSchemaAPI-seed#[84B1357BCCE6840E]-worker) [] 
o.a.s.c.ZkTestServer start zk server on port:46050
   [junit4]   2> 854386 INFO  
(SUITE-TestManagedSchemaAPI-seed#[84B1357BCCE6840E]-worker) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 854387 INFO  
(SUITE-TestManagedSchemaAPI-seed#[84B1357BCCE6840E]-worker) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 854389 INFO  (zkCallback-1203-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@859a7a name:ZooKeeperConnection 
Watcher:127.0.0.1:46050 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 

[jira] [Updated] (SOLR-9207) PeerSync recovery fails if number of updates requested is high

2016-07-06 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9207?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-9207:

Attachment: SOLR-9207.patch

Changes:
# The value for useRangeVersions set in solrconfig.xml wasn't being read at 
all because it was written in solrconfig.xml under the element name 'bool' but 
was being read as 'useRangeVersions'. I changed the element name in the 
configuration to useRangeVersions to make it work.
# The value for useRangeVersions should be in EditableSolrConfigAttributes.json 
so that it can be changed via the config API.
# Similarly, useRangeVersions should be returned in SolrConfig.toMap so that 
its value is returned by the config API.
# The system property set in SolrTestCaseJ4 for useRangeVersions should be 
cleared in the tear-down method.

I'll run precommit + tests and commit if there are no surprises.

> PeerSync recovery fails if number of updates requested is high
> ---
>
> Key: SOLR-9207
> URL: https://issues.apache.org/jira/browse/SOLR-9207
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.1, 6.0
>Reporter: Pushkar Raste
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Attachments: SOLR-9207.patch, SOLR-9207.patch, SOLR-9207.patch_updated
>
>
> {{PeerSync}} recovery fails if we request more than ~99K updates. 
> This can happen if we update solrconfig to retain more {{tlogs}} to leverage 
> https://issues.apache.org/jira/browse/SOLR-6359
> During our testing we found out that recovery using {{PeerSync}} fails if we 
> ask for more than ~99K updates, with the following error:
> {code}
>  WARN  PeerSync [RecoveryThread] - PeerSync: core=hold_shard1 url=
> exception talking to , failed
> org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
> Expected mime type application/octet-stream but got application/xml. 
> application/x-www-form-urlencoded content 
> length (4761994 bytes) exceeds upload limit of 2048 KB
> <int name="code">400</int>
> {code}
> We arrived at ~99K with the following math:
> * max_version_number = Long.MAX_VALUE = 9223372036854775807  
> * bytes per version number = 20 (on the wire, as the POST request sends each 
> version number as a string)
> * 1 additional byte for the separator ,
> * max_versions_in_single_request = 2MB/21 = ~99864
> I could think of 2 ways to fix it:
> 1. Ask for updates in chunks of about 90K inside {{PeerSync.requestUpdates()}}
> 2. Use application/octet-stream encoding 
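
A minimal sketch of fix option 1 above (asking for the versions in chunks), 
with hypothetical method names; the 90K chunk size simply keeps each POST body 
comfortably under the 2 MB limit per the math above:

{code}
import java.util.List;

public class PeerSyncChunking {
  static final int CHUNK_SIZE = 90_000; // < 2MB / ~21 bytes per version string

  // Hypothetical: request missing updates from the peer in chunks instead of
  // one huge request that can exceed the form upload limit.
  static void requestUpdatesInChunks(List<Long> versionsNeeded) {
    for (int from = 0; from < versionsNeeded.size(); from += CHUNK_SIZE) {
      int to = Math.min(from + CHUNK_SIZE, versionsNeeded.size());
      List<Long> chunk = versionsNeeded.subList(from, to);
      sendGetUpdatesRequest(chunk); // one HTTP request per chunk
    }
  }

  static void sendGetUpdatesRequest(List<Long> versions) {
    // placeholder for the actual call made by PeerSync.requestUpdates()
  }
}
{code}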



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-6.x - Build # 319 - Still Failing

2016-07-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/319/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CleanupOldIndexTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([5CCFE3B43C13FA79]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.schema.TestBulkSchemaConcurrent

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [NRTCachingDirectory, 
NRTCachingDirectory, SolrCore]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 2 object(s) that were not 
released!!! [NRTCachingDirectory, NRTCachingDirectory, SolrCore]
at __randomizedtesting.SeedInfo.seed([5CCFE3B43C13FA79]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:257)
at sun.reflect.GeneratedMethodAccessor36.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12083 lines...]
   [junit4] Suite: org.apache.solr.schema.TestBulkSchemaConcurrent
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/solr/build/solr-core/test/J1/temp/solr.schema.TestBulkSchemaConcurrent_5CCFE3B43C13FA79-001/init-core-data-001
   [junit4]   2> 2402807 INFO  
(SUITE-TestBulkSchemaConcurrent-seed#[5CCFE3B43C13FA79]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 2402808 INFO  
(SUITE-TestBulkSchemaConcurrent-seed#[5CCFE3B43C13FA79]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /gb_r/
   [junit4]   2> 2402811 INFO  
(TEST-TestBulkSchemaConcurrent.test-seed#[5CCFE3B43C13FA79]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2402811 INFO  (Thread-6819) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2402811 INFO  (Thread-6819) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2402911 INFO  
(TEST-TestBulkSchemaConcurrent.test-seed#[5CCFE3B43C13FA79]) [] 
o.a.s.c.ZkTestServer start zk server on port:36890
   [junit4]   2> 2402911 INFO  
(TEST-TestBulkSchemaConcurrent.test-seed#[5CCFE3B43C13FA79]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2402918 INFO  
(TEST-TestBulkSchemaConcurrent.test-seed#[5CCFE3B43C13FA79]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2402920 INFO  (zkCallback-3299-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 

[jira] [Updated] (SOLR-9180) need better cloud & RTG testing of TestPseudoReturnFields

2016-07-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9180:
---
Attachment: SOLR-9180.patch

Updated patch with all nocommits updated to point at new jiras (linked to 
this one).

A few nocommits relate to an idea I had to improve the test further, but I 
just left those as TODOs since I don't want to tackle that until/unless the 
rest of the known bugs get resolved (I don't want to risk introducing test 
bugs before the code bugs are resolved).

I plan to commit this patch as-is soon unless anyone spots any flaws.

> need better cloud & RTG testing of TestPseudoReturnFields
> -
>
> Key: SOLR-9180
> URL: https://issues.apache.org/jira/browse/SOLR-9180
> Project: Solr
>  Issue Type: Test
>Reporter: Hoss Man
>Assignee: Hoss Man
> Attachments: SOLR-9180.patch, SOLR-9180.patch, SOLR-9180.patch
>
>
> on the mailing list, Charles Sanders noted that the {{[explain]}} transformer 
> wasn't working in Solr 5(.5.1) - showing a sample query that indicated he was 
> using SolrCloud.
> In 6.0 and on master this works fine, so whatever bug affects 5.x was fixed 
> at some point -- but we don't appear to have any cloud based tests that 
> invoke {{[explain]}}, so we should add something akin to 
> TestPseudoReturnFields to ensure no regressions in the future.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9289) SolrCloud RTG: fl=[docid] silently ignored for all docs

2016-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364928#comment-15364928
 ] 

Hoss Man commented on SOLR-9289:


NOTE: it's possible this is just a subset/dup of SOLR-9286

> SolrCloud RTG: fl=[docid] silently ignored for all docs
> ---
>
> Key: SOLR-9289
> URL: https://issues.apache.org/jira/browse/SOLR-9289
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In SolrCloud mode, the {{\[docid\]}} transformer is completely ignored when 
> used in an RTG request (even for committed docs) ... this is inconsistent with 
> single-node Solr behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9286) SolrCloud RTG: pseudo-fields (like ValueSourceAugmenter, [shard], etc...) silently fail (even for committed doc)

2016-07-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9286:
---
Summary: SolrCloud RTG: pseudo-fields (like ValueSourceAugmenter, [shard], 
etc...) silently fail (even for committed doc)  (was: SolrCloud RTG: 
ValueSourceAugmenter silently fails (even for committed doc))

> SolrCloud RTG: pseudo-fields (like ValueSourceAugmenter, [shard], etc...) 
> silently fail (even for committed doc)
> -
>
> Key: SOLR-9286
> URL: https://issues.apache.org/jira/browse/SOLR-9286
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG with ValueSourceAugmenter (ie: field aliasing or functions in 
> the fl) in SolrCloud, the request can succeed w/o actually performing the 
> field aliasing and/or ValueSourceAugmenter additions.
> This is inconsistent with single-node solr installs (at least as far as 
> committed docs go, see SOLR-9285 regarding uncommitted docs)
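
For illustration only (not from the issue), a SolrJ-style real-time get of the 
kind affected; the alias and function names here are made up for the example:

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.common.params.ModifiableSolrParams;

public class RtgAliasExample {
  // Hypothetical sketch: "client" is any CloudSolrClient pointed at the
  // collection, and "price_alias"/"calc" are illustrative pseudo-fields.
  static SolrDocument fetch(SolrClient client) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("fl", "id,price_alias:price_f,calc:sum(1,1)");
    SolrDocument doc = client.getById("42", params);
    // In SolrCloud the RTG can succeed while "price_alias" and "calc" are
    // silently absent from the returned doc, unlike a single-node install.
    return doc;
  }
}
{code}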



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9286) SolrCloud RTG: ValueSourceAugmenter silently fails (even for committed doc)

2016-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364844#comment-15364844
 ] 

Hoss Man edited comment on SOLR-9286 at 7/6/16 7:17 PM:


See also...

* TestCloudPseudoReturnFields.testFunctionsRTG
* TestCloudPseudoReturnFields.testFunctionsAndExplicitRTG
* TestCloudPseudoReturnFields.testFunctionsAndScoreRTG
* TestCloudPseudoReturnFields.testAugmentersRTG
* TestCloudPseudoReturnFields.testAugmentersAndScoreRTG


was (Author: hossman):
See also...

* TestCloudPseudoReturnFields.testFunctionsRTG
* TestCloudPseudoReturnFields.testFunctionsAndExplicitRTG
* TestCloudPseudoReturnFields.testFunctionsAndScoreRTG

> SolrCloud RTG: ValueSourceAugmenter silently fails (even for committed doc)
> ---
>
> Key: SOLR-9286
> URL: https://issues.apache.org/jira/browse/SOLR-9286
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG with ValueSourceAugmenter (ie: field aliasing or functions in 
> the fl) in SolrCloud, the request can succeed w/o actually performing the 
> field aliasing and/or ValueSourceAugmenter additions.
> This is inconsistent with single-node solr installs (at least as far as 
> committed docs go, see SOLR-9285 regarding uncommitted docs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-07-06 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364916#comment-15364916
 ] 

Shalin Shekhar Mangar commented on SOLR-8858:
-

bq. If there is no document cache and lazy field loading is disabled, then we 
can pass through the fields requested to the codec instead of getting them all 
right? That wouldn't break anything nor add inefficiencies that aren't inherent 
with a user opting out of these 2 optimizations.

+1 to that. That should work for Caleb's use-case as well.

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9289) SolrCloud RTG: fl=[docid] silently ignored for all docs

2016-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364913#comment-15364913
 ] 

Hoss Man commented on SOLR-9289:


Note: we should not attempt to fix this issue until SOLR-9288 is resolved.

> SolrCloud RTG: fl=[docid] silently ignored for all docs
> ---
>
> Key: SOLR-9289
> URL: https://issues.apache.org/jira/browse/SOLR-9289
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In SolrCloud mode, the {{\[docid\]}} transformer is completely ignored when 
> used in an RTG request (even for committed docs) ... this is inconsistent with 
> single-node Solr behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9289) SolrCloud RTG: fl=[docid] silently ignored for all docs

2016-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364910#comment-15364910
 ] 

Hoss Man commented on SOLR-9289:


test code (note that "42" is a committed doc)...

{code}
// behavior shouldn't matter if we are committed or uncommitted
for (String id : Arrays.asList("42","99")) {
  SolrDocument doc = getRandClient(random()).getById(id, 
params("fl","[docid]"));
  String msg = id + ": fl=[docid] => " + doc;
  assertEquals(msg, 1, doc.size());
  assertTrue(msg, doc.getFieldValue("[docid]") instanceof Integer);
}
{code}

Current failure...

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestCloudPseudoReturnFields -Dtests.method=testDocIdAugmenterRTG 
-Dtests.seed=89C42C6FF21F186A -Dtests.slow=true -Dtests.locale=sv 
-Dtests.timezone=Africa/Dakar -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.05s J0 | 
TestCloudPseudoReturnFields.testDocIdAugmenterRTG <<<
   [junit4]> Throwable #1: java.lang.AssertionError: 42: fl=[docid] => 
SolrDocument{} expected:<1> but was:<0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([89C42C6FF21F186A:1183F970114BC110]:0)
   [junit4]>at 
org.apache.solr.cloud.TestCloudPseudoReturnFields.testDocIdAugmenterRTG(TestCloudPseudoReturnFields.java:590)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
{noformat}

> SolrCloud RTG: fl=[docid] silently ignored for all docs
> ---
>
> Key: SOLR-9289
> URL: https://issues.apache.org/jira/browse/SOLR-9289
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In SolrCloud mode, the {{\[docid\]}} transformer is completely ignored when 
> used in an RTG request (even for committed docs) ... this is inconsistent with 
> single-node Solr behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-07-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364908#comment-15364908
 ] 

David Smiley commented on SOLR-8858:


If there is no document cache and lazy field loading is disabled, then we can 
pass through the fields requested to the codec instead of getting them all 
right?  That wouldn't break anything nor add inefficiencies that aren't 
inherent with a user opting out of these 2 optimizations.
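
A rough sketch of what that pass-through could look like, pulled out into a 
standalone method for readability; this illustrates the suggestion, not the 
code in the pull request:

{code}
import java.io.IOException;
import java.util.Set;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.DirectoryReader;

public class FieldPassThroughSketch {
  // Sketch only: when there is no document cache and lazy field loading is off,
  // let the codec visit just the requested stored fields instead of all of them.
  static Document load(DirectoryReader reader, int docId, Set<String> fields,
                       boolean hasDocumentCache, boolean enableLazyFieldLoading)
      throws IOException {
    if (!hasDocumentCache && !enableLazyFieldLoading && fields != null) {
      return reader.document(docId, fields); // only the requested fields are read
    }
    return reader.document(docId);           // existing behavior: full document
  }
}
{code}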

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9289) SolrCloud RTG: fl=[docid] silently ignored for all docs

2016-07-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-9289:
--

 Summary: SolrCloud RTG: fl=[docid] silently ignored for all docs
 Key: SOLR-9289
 URL: https://issues.apache.org/jira/browse/SOLR-9289
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


Found in SOLR-9180 testing.

In SolrCloud mode, the {{\[docid\]}} transformer is completely ignored when 
used in an RTG request (even for committed docs) ... this is inconsistent with 
single-node Solr behavior.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9288) RTG: fl=[docid] silently missing for uncommitted docs

2016-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364906#comment-15364906
 ] 

Hoss Man commented on SOLR-9288:


See TestPseudoReturnFields.testDocIdAugmenterRTG for example...

{code}
// behavior shouldn't matter if we are committed or uncommitted
for (String id : Arrays.asList("42","99")) {
  assertQ(id + ": fl=[docid]",
  req("qt","/get","id",id, "wt","xml", "fl","[docid]")
  ,"count(//doc)=1"
  ,"//doc/int[@name='[docid]']"
  ,"//doc[count(*)=1]"
  );
}
{code}

current failure...

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestPseudoReturnFields -Dtests.method=testDocIdAugmenterRTG 
-Dtests.seed=98335D83793D2329 -Dtests.slow=true -Dtests.locale=bg 
-Dtests.timezone=Pacific/Enderbury -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.08s J1 | TestPseudoReturnFields.testDocIdAugmenterRTG <<<
   [junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
query
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([98335D83793D2329:74889C9A69FA53]:0)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:780)
   [junit4]>at 
org.apache.solr.search.TestPseudoReturnFields.testDocIdAugmenterRTG(TestPseudoReturnFields.java:554)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]> Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=//doc/int[@name='[docid]']
   [junit4]>xml response was: 
   [junit4]> 
   [junit4]> 
   [junit4]> 
   [junit4]>request was:qt=/get&fl=[docid]&id=99&wt=xml
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:773)
{noformat}

NOTE: whatever solution is decided on, once the inconsistency is resolved, 
there are a lot of other test methods in TestPseudoReturnFields that can be 
updated to also exercise {{\[docid\]}}

> RTG: fl=[docid] silently missing for uncommitted docs
> -
>
> Key: SOLR-9288
> URL: https://issues.apache.org/jira/browse/SOLR-9288
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG in a single node solr install, the {{\[docid\]}} transformer 
> works for committed docs, but is silently missing from uncommitted docs.
> this inconsistency is confusing.  It seems like even if there is no valid 
> docid to return in this case, the key should still be present in the 
> resulting doc.
> I would suggest using either {{null}} or {{-1}} in this case?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364898#comment-15364898
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit 12741cc933b57bbddc20d10ebca3dd776703498b in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=12741cc ]

SOLR-9193: Added test using the termFreq param and basic error handling


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}
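
As a rough illustration of the scoring idea (not necessarily the exact formula 
the expression will use), a classic tf-idf per node could look like the 
following, with the traversal count playing the role of tf and docFreq/numDocs 
coming back from the shards:

{code}
public class NodeScoreSketch {
  // Illustrative only: classic tf-idf, using the traversal count as tf.
  static double nodeScore(long traversalCount, long docFreq, long numDocs) {
    double idf = Math.log((double) numDocs / (docFreq + 1)); // +1 guards against docFreq = 0
    return traversalCount * idf;
  }
}
{code}

Downstream expressions such as the top(...) example above would then simply 
sort on the resulting nodeScore field.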



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364895#comment-15364895
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit e1f51a20d74daec2521ad8945a9f642f568147aa in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1f51a2 ]

SOLR-9193: Add scoreNodes Streaming Expression


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364899#comment-15364899
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit c47344195860750cb5758c1cf1f43b8c26cd3260 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c473441 ]

SOLR-9193: Added terms.limit and distrib=true params to /terms request


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364896#comment-15364896
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit 360c4da90b8a416b369f49bc948bfd20338ff39d in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=360c4da ]

SOLR-9193: fixing failing tests due to changes in TermsComponent


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9193) Add scoreNodes Streaming Expression

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364897#comment-15364897
 ] 

ASF subversion and git services commented on SOLR-9193:
---

Commit ad8b22d0b2a05425fbd51bd01ddb621a1ebe98b4 in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ad8b22d ]

SOLR-9193: Fix conflict between parameters of TermsComponent and json facet API


> Add scoreNodes Streaming Expression
> ---
>
> Key: SOLR-9193
> URL: https://issues.apache.org/jira/browse/SOLR-9193
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9193.patch
>
>
> The scoreNodes Streaming Expression is another *GraphExpression*. It will 
> decorate a gatherNodes expression and use a tf-idf scoring algorithm to score 
> the nodes.
> The gatherNodes expression only gathers nodes and aggregations. This is 
> similar in nature to tf in search ranking, where the number of times a node 
> appears in the traversal represents the tf. But this skews recommendations 
> towards nodes that appear frequently in the index.
> Using the idf for each node we can score each node as a function of tf-idf. 
> This will provide a boost to nodes that appear less frequently in the index. 
> The scoreNodes expression will gather the idf's from the shards for each node 
> emitted by the underlying gatherNodes expression. It will then assign the 
> score to each node. 
> The computed score will be added to each node in the *nodeScore* field. The 
> docFreq of the node across the entire collection will be added to each node 
> in the *docFreq* field. Other streaming expressions can then perform a 
> ranking based on the nodeScore or compute their own score using the nodeFreq.
> proposed syntax:
> {code}
> top(n="10",
>   sort="nodeScore desc",
>   scoreNodes(gatherNodes(...))) 
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9243) Add terms.list parameter to the TermsComponent to fetch the docFreq for a list of terms

2016-07-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364894#comment-15364894
 ] 

ASF subversion and git services commented on SOLR-9243:
---

Commit 551bdc6f538a7f7385975bc6bd1bce103518cc1a in lucene-solr's branch 
refs/heads/master from jbernste
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=551bdc6 ]

SOLR-9243:Add terms.list parameter to the TermsComponent to fetch the docFreq 
for a list of terms


> Add terms.list parameter to the TermsComponent to fetch the docFreq for a 
> list of terms
> ---
>
> Key: SOLR-9243
> URL: https://issues.apache.org/jira/browse/SOLR-9243
> Project: Solr
>  Issue Type: Improvement
>  Components: SearchComponents - other
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 6.2
>
> Attachments: SOLR-9243.patch, SOLR-9243.patch, SOLR-9243.patch, 
> SOLR-9243.patch, SOLR-9243.patch
>
>
> This ticket will add a terms.list parameter to the TermsComponent to retrieve 
> Terms and docFreq for a specific list of Terms.
> This is needed to support SOLR-9193 which needs to fetch the docFreq for a 
> list of Terms.
> This should also be useful as a general tool for fetching docFreq given a 
> list of Terms.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9288) RTG: fl=[docid] silently missing for uncommitted docs

2016-07-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-9288:
--

 Summary: RTG: fl=[docid] silently missing for uncommitted docs
 Key: SOLR-9288
 URL: https://issues.apache.org/jira/browse/SOLR-9288
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


Found in SOLR-9180 testing.

when using RTG in a single node solr install, the {{\[docid\]}} transformer 
works for committed docs, but is silently missing from uncommitted docs.

this inconsistency is confusing.  It seems like even if there is no valid docid 
to return in this case, the key should still be present in the resulting doc.

I would suggest using either {{null}} or {{-1}} in this case?
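
A tiny sketch of the suggested behavior (illustrative only, not the actual 
[docid] transformer code):

{code}
import org.apache.solr.common.SolrDocument;

public class DocIdAlwaysPresentSketch {
  // Sketch of the suggestion above: always emit the [docid] key, using -1 when
  // there is no searcher docid yet (e.g. an uncommitted doc served from the
  // update log).
  static void addDocId(SolrDocument doc, int docid) {
    doc.setField("[docid]", docid >= 0 ? docid : -1);
  }
}
{code}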



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9273) Share and reuse config set in a node

2016-07-06 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364879#comment-15364879
 ] 

Scott Blum commented on SOLR-9273:
--

Yeah, that's why I'm saying map name + version => content hash to avoid 
re-fetching from ZK as an initial de-dup.  Then map content hash -> live 
objects as a second layer de-dup.  That way, even if you have differently named 
configs with the same content, they can share.  Or if you have a no-op change 
to a configset, or perhaps even you change a configset and then revert the 
change prior to reloading a core.
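
A minimal sketch of that two-layer de-dup, with hypothetical names; the Object 
values stand in for the shared, parsed ConfigSet instances:

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class ConfigSetDedup {
  // Layer 1: (configset name + ZK version) -> content hash; avoids re-fetching from ZK.
  private final Map<String, String> nameVersionToHash = new ConcurrentHashMap<>();
  // Layer 2: content hash -> shared live config objects, so identical content is
  // shared even across differently named configsets (or no-op/reverted changes).
  private final Map<String, Object> hashToLiveConfigSet = new ConcurrentHashMap<>();

  Object getOrLoad(String name, int zkVersion,
                   Function<String, String> hashFetcher,       // fetches + hashes content from ZK
                   Function<String, Object> configSetLoader) { // parses content into live objects
    String key = name + "/" + zkVersion;
    String contentHash = nameVersionToHash.computeIfAbsent(key, k -> hashFetcher.apply(name));
    return hashToLiveConfigSet.computeIfAbsent(contentHash, configSetLoader);
  }
}
{code}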

> Share and reuse config set in a node
> 
>
> Key: SOLR-9273
> URL: https://issues.apache.org/jira/browse/SOLR-9273
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis, SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.2, master (7.0)
>
>
> Currently, each core in a node ends up creating a completely new instance of 
> ConfigSet with its own schema, solrconfig and other properties. This is 
> wasteful when you have a lot of replicas in the same node with many of them 
> referring to the same config set in Zookeeper.
> There are many issues that need to be addressed for this to work so this is a 
> parent issue to track the work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9281) locate cores to host in zk based on nodeName

2016-07-06 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9281?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364866#comment-15364866
 ] 

Keith Laban commented on SOLR-9281:
---

Relates to [~markrmil...@gmail.com] comment in 
https://issues.apache.org/jira/browse/SOLR-7248?focusedCommentId=14363441=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14363441

> locate cores to host in zk based on nodeName
> 
>
> Key: SOLR-9281
> URL: https://issues.apache.org/jira/browse/SOLR-9281
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>
> When starting up, an instance of Solr, in addition to discovering cores on 
> the local filesystem, should discover its cores in ZK based on its node name.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-07-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364864#comment-15364864
 ] 

ASF GitHub Bot commented on SOLR-8858:
--

Github user maedhroz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/47#discussion_r69787760
  
--- Diff: solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java 
---
@@ -766,12 +766,16 @@ public Document doc(int i, Set fields) throws 
IOException {
 }
 
 final DirectoryReader reader = getIndexReader();
-if (!enableLazyFieldLoading || fields == null) {
-  d = reader.document(i);
+if (fields != null) {
+  if (enableLazyFieldLoading) {
+final SetNonLazyFieldSelector visitor = new 
SetNonLazyFieldSelector(fields, reader, i);
+reader.document(i, visitor);
+d = visitor.doc;
+  } else {
+d = reader.document(i, fields);
--- End diff --

Right. I *think* I addressed that in the last commit, since only full 
documents are cached now. The problem is that the overhead of doing this might 
be unacceptable.


> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #47: SOLR-8858 SolrIndexSearcher#doc() Completely I...

2016-07-06 Thread maedhroz
Github user maedhroz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/47#discussion_r69787760
  
--- Diff: solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java 
---
@@ -766,12 +766,16 @@ public Document doc(int i, Set fields) throws 
IOException {
 }
 
 final DirectoryReader reader = getIndexReader();
-if (!enableLazyFieldLoading || fields == null) {
-  d = reader.document(i);
+if (fields != null) {
+  if (enableLazyFieldLoading) {
+final SetNonLazyFieldSelector visitor = new 
SetNonLazyFieldSelector(fields, reader, i);
+reader.document(i, visitor);
+d = visitor.doc;
+  } else {
+d = reader.document(i, fields);
--- End diff --

Right. I *think* I addressed that in the last commit, since only full 
documents are cached now. The problem is that the overhead of doing this might 
be unacceptable.


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 698 - Still Failing!

2016-07-06 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/698/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestLocalFSCloudBackupRestore.test

Error Message:
Error from server at http://127.0.0.1:35144/solr: 'location' is not specified 
as a query parameter or as a default repository property or as a cluster 
property.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35144/solr: 'location' is not specified as a 
query parameter or as a default repository property or as a cluster property.
at 
__randomizedtesting.SeedInfo.seed([9E2A062278B0F7BA:167E39F8D64C9A42]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:606)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:366)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:166)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.testInvalidPath(AbstractCloudBackupRestoreTestCase.java:149)
at 
org.apache.solr.cloud.AbstractCloudBackupRestoreTestCase.test(AbstractCloudBackupRestoreTestCase.java:128)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Commented] (SOLR-8858) SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field Loading is Enabled

2016-07-06 Thread Caleb Rackliffe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364857#comment-15364857
 ] 

Caleb Rackliffe commented on SOLR-8858:
---

bq. I am not very happy with this solution because from a Solr user's 
perspective, this feature adds no benefit but causes stored fields to be read 
twice for an uncached read? I must also admit that I do not have a good 
suggestion on how to avoid that.

[~shalinmangar] Unless there's a less invasive solution I'm overlooking, I 
think it might be best to abandon this issue as something to handle in Solr 
proper.

[~dsmiley] Our fork actually reads most stored fields from an embedded database 
and relies on the visitor's fields information to make decisions about when 
(and when not) to read stored fields from Solr itself. We don't actually use 
{{documentCache}} at all, so the fixes I made around the initial patch to get 
the unit tests passing won't even be necessary.

Let me know if there are any objections to closing this...

> SolrIndexSearcher#doc() Completely Ignores Field Filters Unless Lazy Field 
> Loading is Enabled
> -
>
> Key: SOLR-8858
> URL: https://issues.apache.org/jira/browse/SOLR-8858
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.6, 4.10, 5.5
>Reporter: Caleb Rackliffe
>  Labels: easyfix
>
> If {{enableLazyFieldLoading=false}}, a perfectly valid fields filter will be 
> ignored, and we'll create a {{DocumentStoredFieldVisitor}} without it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9285) ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on uncommitted doc

2016-07-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9285:
---
Description: 
Found in SOLR-9180 testing.

Even in single node solr envs, doing an RTG for an uncommitted doc that uses 
ValueSourceAugmenter (ie: simple field aliasing, or functions in fl) causes an 
ArrayIndexOutOfBoundsException

  was:
Found in SOLR-9180 testing.

Even in single node solr envs, doing an RTG for an uncommitted doc that uses 
ValueSourceAugmenter causes an ArrayIndexOutOfBoundsException


See also...

* TestPseudoReturnFields.testFunctionsRTG
* TestPseudoReturnFields.testFunctionsAndExplicitRTG

> ArrayIndexOutOfBoundsException when ValueSourceAugmenter used with RTG on 
> uncommitted doc
> -
>
> Key: SOLR-9285
> URL: https://issues.apache.org/jira/browse/SOLR-9285
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> Even in single node solr envs, doing an RTG for an uncommitted doc that uses 
> ValueSourceAugmenter (ie: simple field aliasing, or functions in fl) causes 
> an ArrayIndexOutOfBoundsException



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9286) SolrCloud RTG: ValueSourceAugmenter silently fails (even for committed doc)

2016-07-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-9286:
---
Description: 
Found in SOLR-9180 testing.

when using RTG with ValueSourceAugmenter (ie: field aliasing or functions in 
the fl) in SolrCloud, the request can succeed w/o actually performing the field 
aliasing and/or ValueSourceAugmenter additions.

This is inconsistent with single-node solr installs (at least as far as 
committed docs go, see SOLR-9285 regarding uncommitted docs)

  was:
Found in SOLR-9180 testing.

when using RTG with field aliasing in SolrCloud, the request can succeed w/o 
actually performing the field aliasing and/or  ValueSourceAugmenter additions

Summary: SolrCloud RTG: ValueSourceAugmenter silently fails (even for 
committed doc)  (was: SolrCloud RTG: field aliasing silently fails (even for 
committed doc))

See also...

* TestCloudPseudoReturnFields.testFunctionsRTG
* TestCloudPseudoReturnFields.testFunctionsAndExplicitRTG
* TestCloudPseudoReturnFields.testFunctionsAndScoreRTG

> SolrCloud RTG: ValueSourceAugmenter silently fails (even for committed doc)
> ---
>
> Key: SOLR-9286
> URL: https://issues.apache.org/jira/browse/SOLR-9286
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> when using RTG with ValueSourceAugmenter (ie: field aliasing or functions in 
> the fl) in SolrCloud, the request can succeed w/o actually performing the 
> field aliasing and/or ValueSourceAugmenter additions.
> This is inconsistent with single-node solr installs (at least as far as 
> committed docs go, see SOLR-9285 regarding uncommitted docs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9287) single node RTG: NPE if score is requested

2016-07-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15364795#comment-15364795
 ] 

Hoss Man commented on SOLR-9287:


See also...

* TestPseudoReturnFields.testScoreAndExplicitRealFieldsRTG
* TestPseudoReturnFields.testScoreAndAllRealFieldsRTG
* TestPseudoReturnFields.testGlobsAndScoreRTG
* TestPseudoReturnFields.testAugmentersAndScoreRTG 
* TestPseudoReturnFields.testAugmentersGlobsExplicitAndScoreOhMyRTG


> single node RTG: NPE if score is requested
> --
>
> Key: SOLR-9287
> URL: https://issues.apache.org/jira/browse/SOLR-9287
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Found in SOLR-9180 testing.
> In single node solr setups, if an RTG request is made that includes "score" 
> in the fl, then there is an NPE from ResultContext.wantsScores.
> This does *not* happen if the same request happens in a SolrCloud setup - in 
> that case the request for "score" is silently ignored -- this seems to me 
> like the optimal behavior  (similarly: using the {{\[explain\]}} transformer 
> in the fl for an RTG is currently silently ignored for both single node and 
> solr cloud envs)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


