[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 107 - Still Unstable

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/107/

1 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
We think that split was successful but sub-shard states were not updated even 
after 2 minutes.

Stack Trace:
java.lang.AssertionError: We think that split was successful but sub-shard 
states were not updated even after 2 minutes.
at 
__randomizedtesting.SeedInfo.seed([37A21AF660067B4B:BC85C9272100D0CF]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:555)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
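A typical local reproduction attempt looks like the following (a sketch: the build email's own "reproduce with" line is not included above, so the flags below are just the standard randomized-testing ones, with the master seed taken from the trace):

    # Sketch only; run from solr/core in a lucene-solr checkout, adding
    # -Dtests.slow=true etc. as your setup requires.
    ant test -Dtestcase=ShardSplitTest -Dtests.method=testSplitWithChaosMonkey \
        -Dtests.seed=37A21AF660067B4B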
   

[jira] [Commented] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559231#comment-16559231
 ] 

ASF subversion and git services commented on SOLR-12536:


Commit be53d6b18f653b3585e54f9245c1c797d9f1aade in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be53d6b ]

SOLR-12536: ref guide typo fixed


> Enhance autoscaling policy to equally distribute replicas on the basis of 
> arbitrary properties
> --
>
> Key: SOLR-12536
> URL: https://issues.apache.org/jira/browse/SOLR-12536
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> *example 1*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : "#EACH" }
> // if the ports are "8983", "7574", "7575", the above rule is equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : ["8983", "7574", 
> "7575"]}{code}
> *case 1*: {{numShards=2, replicationFactor=3}}. In this case all the nodes 
> are divided into 3 buckets, each containing the nodes listening on one of 
> the three ports. Each bucket must contain {{3 * 2 / 3 = 2}} replicas.
>  
> *example 2*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : "#EACH" }
> // if the zones are "east_1", "east_2", "west_1", the above rule is
> // equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : ["east_1", 
> "east_2", "west_1"]}{code}
> The behavior is similar to example 1, except that in this case the rule is 
> applied to a system property.
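
A hedged sketch of how a rule like example 2 could be installed (my illustration, not part of the issue: the host, port, and the single-rule payload are assumptions; adjust to your deployment):

{code:java}
# Sketch: set the zone-based rule from example 2 as the cluster policy,
# assuming a stock SolrCloud node on localhost:8983.
curl -X POST -H 'Content-type:application/json' \
  http://localhost:8983/api/cluster/autoscaling -d '{
    "set-cluster-policy": [
      {"replica": "#EQUAL", "shard": "#EACH", "sysprop.zone": "#EACH"}
    ]
  }'
{code}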






[jira] [Commented] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559230#comment-16559230
 ] 

ASF subversion and git services commented on SOLR-12536:


Commit dfb18a6d7246ae7e68a241efc49188cfb4c07cc4 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dfb18a6 ]

SOLR-12536: ref guide typo fixed


> Enhance autoscaling policy to equally distribute replicas on the basis of 
> arbitrary properties
> --
>
> Key: SOLR-12536
> URL: https://issues.apache.org/jira/browse/SOLR-12536
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> *example 1*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : "#EACH" }
> // if the ports are "8983", "7574", "7575", the above rule is equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : ["8983", "7574", 
> "7575"]}{code}
> *case 1*: {{numShards=2, replicationFactor=3}}. In this case all the nodes 
> are divided into 3 buckets, each containing the nodes listening on one of 
> the three ports. Each bucket must contain {{3 * 2 / 3 = 2}} replicas.
>  
> *example 2*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : "#EACH" }
> // if the zones are "east_1", "east_2", "west_1", the above rule is
> // equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : ["east_1", 
> "east_2", "west_1"]}{code}
> The behavior is similar to example 1, except that in this case the rule is 
> applied to a system property.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2424 - Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2424/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild

Error Message:
junit.framework.AssertionFailedError: Unexpected wrapped exception type, 
expected CoreIsClosedException

Stack Trace:
java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: 
Unexpected wrapped exception type, expected CoreIsClosedException
at 
__randomizedtesting.SeedInfo.seed([BC833D0B7865F353:630E5FB4460CA631]:0)
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:191)
at 
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: junit.framework.AssertionFailedError: Unexpected wrapped 

[jira] [Updated] (SOLR-12589) metric names should not contain the "/" path separator while using SolrGangliaReporter

2018-07-26 Thread weizhenyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizhenyuan updated SOLR-12589:
---
Attachment: SOLR-12589-3.diff

> metric names should not contain the "/" path separator while using 
> SolrGangliaReporter
> ---
>
> Key: SOLR-12589
> URL: https://issues.apache.org/jira/browse/SOLR-12589
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3.1, 7.4, master (8.0)
>Reporter: weizhenyuan
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12589-1.diff, SOLR-12589-2.diff, SOLR-12589-3.diff
>
>
> While using SolrGangliaReporter, the default metric names contain "/", 
> which is commonly used as a file-system separator, so gmetad hits an ERROR 
> because creating the metrics data file fails. Below is the exception from 
> /var/log/messages:
> Jul 25 15:01:37 hb-bp1tg6t003y04p201-001 gmetad: Unable to write meta data 
> for metric 
> solr.node.QUERY.httpShardHandler.http_//hb-bp1tg6t003y04p201-003.hbase.rds.aliyuncs.com_8983/solr/admin/cores.post.requests.p999
>  to RRD
> Jul 25 15:01:37 hb-bp1tg6t003y04p201-001 gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/cores.requestTimes.p999.rrd':
>  No such file or directory
> Jul 25 15:01:37 hb-bp1tg6t003y04p201-001 gmetad: Unable to write meta data 
> for metric solr.node.ADMIN./admin/cores.requestTimes.p999 to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/collections.requestTimes.min.rrd':
>  No such file or directory
> Jul 25 15:01:37 localhost gmetad: Unable to write meta data for metric 
> solr.node.ADMIN./admin/collections.requestTimes.min to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/authorization.requestTimes.stddev.rrd':
>  No such file or directory
> Jul 25 15:01:37 localhost gmetad: Unable to write meta data for metric 
> solr.node.ADMIN./admin/authorization.requestTimes.stddev to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/autoscaling/history.requestTimes.p99.rrd':
>  No such file or directory
> Jul 25 15:01:37 localhost gmetad: Unable to write meta data for metric 
> solr.node.ADMIN./admin/autoscaling/history.requestTimes.p99 to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.QUERY./admin/autoscaling.timeouts.m1_rate.rrd':
>  No such file or directory
> I think metric names should be normalized per reporter, such as 
> SolrGangliaReporter. Other normalizations may be appropriate for other 
> reasons, but for Ganglia the "/" must be replaced.
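
For illustration only, a minimal sketch of the kind of normalization being proposed (the class and method below are hypothetical, not the attached SOLR-12589-*.diff):

{code:java}
// Hypothetical helper, not the attached patch: strip the path separator
// from Solr metric names before they reach gmetad/RRD, since RRD_create
// treats "/" as a directory boundary.
public final class GangliaMetricNames {
  private GangliaMetricNames() {}

  /** Collapses any run of '/' or '.' into a single '.', e.g.
   *  "solr.node.ADMIN./admin/cores.requestTimes.p999"
   *  becomes "solr.node.ADMIN.admin.cores.requestTimes.p999". */
  public static String normalize(String metricName) {
    return metricName.replaceAll("[/.]+", ".");
  }
}
{code}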






[jira] [Updated] (SOLR-12589) metric names should not contain the "/" path separator while using SolrGangliaReporter

2018-07-26 Thread weizhenyuan (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

weizhenyuan updated SOLR-12589:
---
Attachment: SOLR-12589-2.diff

> metric names should not contain the "/" path separator while using 
> SolrGangliaReporter
> ---
>
> Key: SOLR-12589
> URL: https://issues.apache.org/jira/browse/SOLR-12589
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.3.1, 7.4, master (8.0)
>Reporter: weizhenyuan
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12589-1.diff, SOLR-12589-2.diff
>
>
> While using SolrGangliaReporter, the default metric names contain "/", 
> which is commonly used as a file-system separator, so gmetad hits an ERROR 
> because creating the metrics data file fails. Below is the exception from 
> /var/log/messages:
> Jul 25 15:01:37 hb-bp1tg6t003y04p201-001 gmetad: Unable to write meta data 
> for metric 
> solr.node.QUERY.httpShardHandler.http_//hb-bp1tg6t003y04p201-003.hbase.rds.aliyuncs.com_8983/solr/admin/cores.post.requests.p999
>  to RRD
> Jul 25 15:01:37 hb-bp1tg6t003y04p201-001 gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/cores.requestTimes.p999.rrd':
>  No such file or directory
> Jul 25 15:01:37 hb-bp1tg6t003y04p201-001 gmetad: Unable to write meta data 
> for metric solr.node.ADMIN./admin/cores.requestTimes.p999 to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/collections.requestTimes.min.rrd':
>  No such file or directory
> Jul 25 15:01:37 localhost gmetad: Unable to write meta data for metric 
> solr.node.ADMIN./admin/collections.requestTimes.min to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/authorization.requestTimes.stddev.rrd':
>  No such file or directory
> Jul 25 15:01:37 localhost gmetad: Unable to write meta data for metric 
> solr.node.ADMIN./admin/authorization.requestTimes.stddev to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.ADMIN./admin/autoscaling/history.requestTimes.p99.rrd':
>  No such file or directory
> Jul 25 15:01:37 localhost gmetad: Unable to write meta data for metric 
> solr.node.ADMIN./admin/autoscaling/history.requestTimes.p99 to RRD
> Jul 25 15:01:37 localhost gmetad: RRD_create: creating 
> '/var/lib/ganglia/rrds/__SummaryInfo__/solr.node.QUERY./admin/autoscaling.timeouts.m1_rate.rrd':
>  No such file or directory
> I think metric names should be normalized per reporter, such as 
> SolrGangliaReporter. Other normalizations may be appropriate for other 
> reasons, but for Ganglia the "/" must be replaced.






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_172) - Build # 7449 - Still Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7449/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseSerialGC

5 tests failed.
FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild

Error Message:
junit.framework.AssertionFailedError: Unexpected wrapped exception type, 
expected CoreIsClosedException

Stack Trace:
java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: 
Unexpected wrapped exception type, expected CoreIsClosedException
at 
__randomizedtesting.SeedInfo.seed([5F789CF101198E5E:80F5FE4E3F70DB3C]:0)
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: junit.framework.AssertionFailedError: Unexpected wrapped 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 112 - Still Unstable

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/112/

5 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_1. See SOLR-5309 expected:<88> but was:<89>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_1. See SOLR-5309 
expected:<88> but was:<89>
at 
__randomizedtesting.SeedInfo.seed([C20C9F28D09D1448:4A58A0F27E6179B0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:969)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:751)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Resolved] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties

2018-07-26 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-12536.
---
   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> Enhance autoscaling policy to equally distribute replicas on the basis of 
> arbitrary properties
> --
>
> Key: SOLR-12536
> URL: https://issues.apache.org/jira/browse/SOLR-12536
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
>
> *example 1*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : "#EACH" }
> // if the ports are "8983", "7574", "7575", the above rule is equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : ["8983", "7574", 
> "7575"]}{code}
> *case 1*: {{numShards=2, replicationFactor=3}}. In this case all the nodes 
> are divided into 3 buckets, each containing the nodes listening on one of 
> the three ports. Each bucket must contain {{3 * 2 / 3 = 2}} replicas.
>  
> *example 2*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : "#EACH" }
> // if the zones are "east_1", "east_2", "west_1", the above rule is
> // equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : ["east_1", 
> "east_2", "west_1"]}{code}
> The behavior is similar to example 1, except that in this case the rule is 
> applied to a system property.






[jira] [Commented] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559167#comment-16559167
 ] 

ASF subversion and git services commented on SOLR-12536:


Commit 90424cbe271a4eab174b2897999ccfbf4bc149df in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=90424cbe ]

SOLR-12536: ref guide


> Enhance autoscaling policy to equally distribute replicas on the basis of 
> arbitrary properties
> --
>
> Key: SOLR-12536
> URL: https://issues.apache.org/jira/browse/SOLR-12536
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> *example 1*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : "#EACH" }
> // if the ports are "8983", "7574", "7575", the above rule is equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : ["8983", "7574", 
> "7575"]}{code}
> *case 1*: {{numShards=2, replicationFactor=3}}. In this case all the nodes 
> are divided into 3 buckets, each containing the nodes listening on one of 
> the three ports. Each bucket must contain {{3 * 2 / 3 = 2}} replicas.
>  
> *example 2*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : "#EACH" }
> // if the zones are "east_1", "east_2", "west_1", the above rule is
> // equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : ["east_1", 
> "east_2", "west_1"]}{code}
> The behavior is similar to example 1, except that in this case the rule is 
> applied to a system property.






[jira] [Commented] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559166#comment-16559166
 ] 

ASF subversion and git services commented on SOLR-12536:


Commit e492926a44c9335cb3c03adf3e06a4e42e3d072a in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e492926 ]

SOLR-12536: ref guide


> Enhance autoscaling policy to equally distribute replicas on the basis of 
> arbitrary properties
> --
>
> Key: SOLR-12536
> URL: https://issues.apache.org/jira/browse/SOLR-12536
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> *example 1*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : "#EACH" }
> // if the ports are "8983", "7574", "7575", the above rule is equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : ["8983", "7574", 
> "7575"]}{code}
> *case 1*: {{numShards=2, replicationFactor=3}}. In this case all the nodes 
> are divided into 3 buckets, each containing the nodes listening on one of 
> the three ports. Each bucket must contain {{3 * 2 / 3 = 2}} replicas.
>  
> *example 2*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : "#EACH" }
> // if the zones are "east_1", "east_2", "west_1", the above rule is
> // equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : ["east_1", 
> "east_2", "west_1"]}{code}
> The behavior is similar to example 1, except that in this case the rule is 
> applied to a system property.






Re: Synonyms + autoGeneratePhraseQueries

2018-07-26 Thread Michael Sokolov
Did you mean q=oow in your example? As written, I don't see how there is a
problem.

On Thu, Jul 26, 2018 at 8:41 AM Andrea Gazzarini wrote:

> Hi, still fighting with synonyms, I have another question.
> I'm not understanding the role, and the effect, of the
> "autoGeneratePhraseQueries" attribute in a synonym context.
> I mean, if I have the following field type:
>
> <fieldType ... autoGeneratePhraseQueries="true">
>   <analyzer>
> <tokenizer .../>
> <filter class="solr.SynonymGraphFilterFactory"
> synonyms="synonyms.txt" ignoreCase="false" expand="true"/>
>   </analyzer>
> </fieldType>
>
>
> with the following synonym: *out of warranty,oow*
>
> with the following query: *q=out of warranty*
>
> The output query is exactly what I would expect: *(title:oow
> PhraseQuery(title:"out of warranty"))*
>
> Setting autoGeneratePhraseQueries to *false* (or rather, omitting the
> attribute declaration altogether), the output query is:
>
> *(title:oow (+title:out +title:of +title:warranty))*
> Which matches things like "I had to step out for renewing the warranty of
> my device".
>
> At first glance this sounds completely wrong to me; or rather, I cannot
> imagine a use case where that synonym decomposition would be useful. Is
> that intended? I would say the query parser should always generate a
> phrase query for multi-term synonyms, as in the first example
> (i.e. autoGeneratePhraseQueries=true).
>
> Thanks in advance,
> Andrea
>
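
For readers following the thread, here is a minimal, self-contained Lucene sketch of the two behaviors Andrea describes. It is an illustration, not the thread's actual Solr setup: the analyzer wiring, field name, and class name are my assumptions, and to my understanding QueryBuilder's autoGenerateMultiTermSynonymsPhraseQuery flag is the Lucene-level switch behind the field type's autoGeneratePhraseQueries attribute.

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;
import org.apache.lucene.util.QueryBuilder;

public class SynonymPhraseSketch {

  public static void main(String[] args) throws Exception {
    // Emulate the rule "out of warranty,oow" with expand=true: every
    // variant maps to every variant, the way Solr's synonym parser does.
    SynonymMap.Builder b = new SynonymMap.Builder(true);
    CharsRef phrase = SynonymMap.Builder.join(
        new String[] {"out", "of", "warranty"}, new CharsRefBuilder());
    CharsRef oow = new CharsRef("oow");
    b.add(phrase, phrase, false);
    b.add(phrase, oow, false);
    b.add(oow, oow, false);
    b.add(oow, phrase, false);
    SynonymMap map = b.build();

    // Query-time analyzer: whitespace tokens + graph synonyms, no stopwords.
    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer tok = new WhitespaceTokenizer();
        TokenStream ts = new SynonymGraphFilter(tok, map, false);
        return new TokenStreamComponents(tok, ts);
      }
    };

    QueryBuilder qb = new QueryBuilder(analyzer);

    qb.setAutoGenerateMultiTermSynonymsPhraseQuery(true);
    // Roughly: (title:oow title:"out of warranty")
    System.out.println(qb.createBooleanQuery("title", "out of warranty"));

    qb.setAutoGenerateMultiTermSynonymsPhraseQuery(false);
    // Roughly: (title:oow (+title:out +title:of +title:warranty))
    System.out.println(qb.createBooleanQuery("title", "out of warranty"));
  }
}

Flipping that one flag is exactly the difference between the two query trees quoted in the message above.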


Re: SynonymGraphFilter followed by StopFilter

2018-07-26 Thread Michael Sokolov
 > In general I’d avoid index-time synonyms in lucene because synonyms can
create graphs (eg if a single term gets expanded to several terms), and we
can’t index graphs correctly.

I wonder what it would take to address this. I guess the blast radius of
adding a token "width" could be pretty large. Is there an issue or any past
discussion about that?

On Thu, Jul 26, 2018 at 11:42 AM Andrea Gazzarini wrote:

> Hi Walter,
> many thanks for the response and without any constraint at all, I would
> agree with you. From your message I clearly understand your experience is
> greater than mine. My 2 cents inline below:
>
> > Move the synonym filter to the index analyzer chain. That provides
> better performance and avoids some surprising relevance behavior. With
> synonyms at query time, you’ll see different idf for terms in the synonym
> set, with the rare variant scoring higher. That is probably the opposite of
> what is expected.
>
> Unfortunately moving the synonym filter to the index analyzer is not an
> option: the project I'm working on has a huge index and the synonyms list
> (at least at this stage) changes frequently; re-indexing everything from
> scratch each time a change occurs is a big problem. On the other hand, the
> IDF issue you mention hasn't produced many unwanted effects, at least so
> far. But I take the point, thanks for the hint.
>
> > Also, phrase synonyms just don’t work at query time because the terms
> are parsed into individual tokens by the query parser, not the tokenizer.
> Here I don't follow you: using the SynonymGraphFilter + splitOnWhitespace =
> false + autoGeneratePhraseQueries I get the synonym phrasing working
> correctly (see the first example in my email).
>
> > Don’t use stop words. Just remove that line. Removing stop words is a
> performance and space hack that was useful in the 1960’s, but causes
> problems now. I’ve never used stop word removal and I started in search
> with Infoseek in 1996. Stop word removal is like a binary idf, ignoring
> common words. Since we have idf, we can give a lower score to common words
> and keep them in the index.
>
> And this, as I see it, is something that has animated long discussions
> around using / avoiding stopwords. I will look into your suggestion and
> what applying that approach would mean for my project, but in the meantime
> I think, also looking at the JIRA issues Alan pointed to in his answer,
> that the issue is there and it's real; I mean, something doesn't work as
> expected (my use case, as far as I understand, is just one example, because
> the problem is broader and related to the FilteringTokenFilter)
>
> Thanks again,
> Andrea
>
> On 26/07/18 16:59, Walter Underwood wrote:
>
> Move the synonym filter to the index analyzer chain. That provides better
> performance and avoids some surprising relevance behavior. With synonyms at
> query time, you’ll see different idf for terms in the synonym set, with the
> rare variant scoring higher. That is probably the opposite of what is
> expected.
>
> Also, phrase synonyms just don’t work at query time because the terms are
> parsed into individual tokens by the query parser, not the tokenizer.
>
> Don’t use stop words. Just remove that line. Removing stop words is a
> performance and space hack that was useful in the 1960’s, but causes
> problems now. I’ve never used stop word removal and I started in search
> with Infoseek in 1996. Stop word removal is like a binary idf, ignoring
> common words. Since we have idf, we can give a lower score to common words
> and keep them in the index.
>
> Do those two things and it should work as you expect.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
> On Jul 26, 2018, at 3:23 AM, Andrea Gazzarini wrote:
>
> Hi Alan, thanks for the response and thank you very much for the pointers
>
> On 26/07/18 12:16, Alan Woodward wrote:
>
> Hi Andrea,
>
> This is a long-standing issue: see
> https://issues.apache.org/jira/browse/LUCENE-4065 and
> https://issues.apache.org/jira/browse/LUCENE-8250 for discussion.  I
> don’t think we’ve reached a consensus on how to fix it yet, but more
> examples are good.
>
> Unfortunately I don’t think changing the StopFilter to ignore SYNONYM
> tokens will work, because then you’ll generate queries that always fail -
> they’ll search for ‘of’ in the middle of the phrase, but ‘of’ never gets
> indexed because it’s removed by the StopFilter at index time.
>
> - Alan
>
> On 26 Jul 2018, at 08:04, Andrea Gazzarini  wrote:
>
> Hi,
> I have the following field type definition:
>
> <fieldType ... autoGeneratePhraseQueries="true">
>   <analyzer>
> <tokenizer .../>
> <filter class="solr.SynonymGraphFilterFactory"
> synonyms="synonyms.txt" ignoreCase="false" expand="true"/>
> <filter class="solr.StopFilterFactory" words="..."
> ignoreCase="false"/>
>   </analyzer>
> </fieldType>
>
> Where synonyms and stopwords are defined as follows:
>
> synonyms = out of warranty,oow
> stopwords = of
>
> Running the following query:
>
> q=my tv went out *of* 

[jira] [Commented] (SOLR-12573) Configuring and using SolrGangliaReporter encounters a NoClassDefFoundError

2018-07-26 Thread weizhenyuan (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12573?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16559154#comment-16559154
 ] 

weizhenyuan commented on SOLR-12573:


[~jpountz] Any suggestions on this patch? I am packaging the lucene-solr 
project so that more people in our company can use it easily, and enabling 
SolrGangliaReporter is essential for that.

> Configuring and using SolrGangliaReporter encounters a NoClassDefFoundError
> 
>
> Key: SOLR-12573
> URL: https://issues.apache.org/jira/browse/SOLR-12573
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Affects Versions: 7.3.1, 7.4, master (8.0)
>Reporter: weizhenyuan
>Priority: Minor
>  Labels: build, patch
> Fix For: master (8.0)
>
> Attachments: ExceptionDetail.log, SOLR-12573-1.patch
>
>
> After configuring SolrGangliaReporter and starting the Solr service, the 
> NoClassDefFoundError below is thrown:
> java.lang.NoClassDefFoundError: org/acplt/oncrpc/XdrEncodingStream
> at info.ganglia.gmetric4j.gmetric.GMetric.<init>(GMetric.java:82)
> at info.ganglia.gmetric4j.gmetric.GMetric.<init>(GMetric.java:58)
> at info.ganglia.gmetric4j.gmetric.GMetric.<init>(GMetric.java:40)
> at 
> org.apache.solr.metrics.reporters.SolrGangliaReporter.lambda$start$0(SolrGangliaReporter.java:106)
> at 
> org.apache.solr.metrics.reporters.ReporterClientCache.getOrCreate(ReporterClientCache.java:59)
> at 
> org.apache.solr.metrics.reporters.SolrGangliaReporter.start(SolrGangliaReporter.java:106)
> at
> ..
> Caused by: java.lang.ClassNotFoundException: 
> org.acplt.oncrpc.XdrEncodingStream
> at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:448)
> at java.lang.ClassLoader.loadClass(ClassLoader.java:380)
> The dependency should be added to solr/server/ivy.xml by default.
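
For illustration, the kind of ivy.xml entry this implies might look as follows (the coordinates are an assumption inferred from the missing org.acplt.oncrpc package; the attached SOLR-12573-1.patch may differ):

{code:xml}
<!-- Hypothetical entry for solr/server/ivy.xml: the ONC/RPC classes
     (org.acplt.oncrpc.*) that gmetric4j needs at runtime when
     SolrGangliaReporter is enabled. -->
<dependency org="org.acplt" name="oncrpc" rev="1.0.7"/>
{code}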






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2423 - Failure!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2423/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

All tests passed

Build Log:
[...truncated 1875 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/core/test/temp/junit4-J2-20180727_16_27915998939265006527805.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/core/test/temp/junit4-J0-20180727_16_27911445238865616674831.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/core/test/temp/junit4-J1-20180727_16_27916901856123114652085.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 275 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/test-framework/test/temp/junit4-J0-20180727_000700_5253542569145113932212.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 8 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/test-framework/test/temp/junit4-J1-20180727_000700_5251978387896858077998.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 18 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/test-framework/test/temp/junit4-J2-20180727_000700_52510264387744550871437.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 1080 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/analysis/common/test/temp/junit4-J0-20180727_000843_70512957636568364095854.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J0: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/analysis/common/test/temp/junit4-J2-20180727_000843_70911577442567325161514.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J2: EOF 

[...truncated 3 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/analysis/common/test/temp/junit4-J1-20180727_000843_70616542198094216253600.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 252 lines...]
   [junit4] JVM J1: stderr was not empty, see: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/lucene/build/analysis/icu/test/temp/junit4-J1-20180727_001027_9096910330185777150357.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: Option 
UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in 
a future release.
   [junit4] <<< JVM J1: EOF 

[...truncated 3 lines...]
   [junit4] JVM J2: stderr was not empty, see: 

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 748 - Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/748/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild

Error Message:
junit.framework.AssertionFailedError: Unexpected wrapped exception type, 
expected CoreIsClosedException

Stack Trace:
java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: 
Unexpected wrapped exception type, expected CoreIsClosedException
at 
__randomizedtesting.SeedInfo.seed([8AE840E0AD933A5E:5565225F93FA6F3C]:0)
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: junit.framework.AssertionFailedError: Unexpected 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 701 - Still Unstable

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/701/

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 3 object(s) that were not released!!! 
[MockDirectoryWrapper, InternalHttpClient, SolrCore] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)  at 
org.apache.solr.core.SolrCore.initIndex(SolrCore.java:768)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:960)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:869)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1135)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:681)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:319)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:328)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:226)  
at org.apache.solr.handler.IndexFetcher.(IndexFetcher.java:268)  at 
org.apache.solr.handler.ReplicationHandler.inform(ReplicationHandler.java:1187) 
 at org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:696) 
 at org.apache.solr.core.SolrCore.(SolrCore.java:993)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:869)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1135)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:681)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1045)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:869)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1135)
  at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:681)  
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 3 object(s) that were not 
released!!! [MockDirectoryWrapper, InternalHttpClient, SolrCore]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:95)
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:768)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:960)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:869)
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1135)
at 
org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:681)
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22533 - Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22533/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild

Error Message:
junit.framework.AssertionFailedError: Unexpected wrapped exception type, 
expected CoreIsClosedException

Stack Trace:
java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: 
Unexpected wrapped exception type, expected CoreIsClosedException
at 
__randomizedtesting.SeedInfo.seed([780EC9A5C7052005:A783AB1AF96C7567]:0)
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: junit.framework.AssertionFailedError: 

[JENKINS] Lucene-Solr-repro - Build # 1048 - Unstable

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1048/

[...truncated 40 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/700/consoleText

[repro] Revision: 950b7b6b1b92849721eaed50ecad9711199180e8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeLostTriggerRestoreState -Dtests.seed=1C03E7D0BA1D49FA 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=el-CY 
-Dtests.timezone=Asia/Kolkata -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=1C03E7D0BA1D49FA 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=es-CL 
-Dtests.timezone=Asia/Dacca -Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestCloudCollectionsListeners 
-Dtests.method=testWatchesWorkForBothStateFormats -Dtests.seed=804560C5F2E9C69C 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sl 
-Dtests.timezone=America/Havana -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestCloudCollectionsListeners 
-Dtests.method=testCollectionDeletion -Dtests.seed=804560C5F2E9C69C 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sl 
-Dtests.timezone=America/Havana -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
e2b08a4d473e68ca5f1b868cc55f550585221be7
[repro] git fetch
[repro] git checkout 950b7b6b1b92849721eaed50ecad9711199180e8

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/solrj
[repro]       TestCloudCollectionsListeners
[repro]    solr/core
[repro]       TestTriggerIntegration
[repro]       MoveReplicaHDFSTest
[repro] ant compile-test

[...truncated 2481 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestCloudCollectionsListeners" -Dtests.showOutput=onerror  
-Dtests.seed=804560C5F2E9C69C -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=sl -Dtests.timezone=America/Havana -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 78 lines...]
[repro] ant compile-test

[...truncated 1334 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestTriggerIntegration|*.MoveReplicaHDFSTest" 
-Dtests.showOutput=onerror  -Dtests.seed=1C03E7D0BA1D49FA -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=el-CY -Dtests.timezone=Asia/Kolkata 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 9081 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro]   0/5 failed: org.apache.solr.common.cloud.TestCloudCollectionsListeners
[repro]   4/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro] git checkout e2b08a4d473e68ca5f1b868cc55f550585221be7

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10299) Provide search for online Ref Guide

2018-07-26 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10299?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558932#comment-16558932
 ] 

Mikhail Khludnev commented on SOLR-10299:
-

Here is a simple prototype: 
http://people.apache.org/~mkhl/searchable-solr-guide-7-3/
Feedback is much appreciated. 

> Provide search for online Ref Guide
> ---
>
> Key: SOLR-10299
> URL: https://issues.apache.org/jira/browse/SOLR-10299
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Priority: Major
> Attachments: basic-services-diagram.png
>
>
> The POC to move the Ref Guide off Confluence did not address providing 
> full-text search of the page content, not because it's hard or impossible, 
> but because there were plenty of other issues to work on.
> The current HTML page design provides a title index, but to replicate the 
> current Confluence experience, the online version(s) need to provide a 
> full-text search experience.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7314) Graduate LatLonPoint to core

2018-07-26 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558854#comment-16558854
 ] 

Adrien Grand commented on LUCENE-7314:
--

[~nknize] Shall we resolve this issue now?

> Graduate LatLonPoint to core
> 
>
> Key: LUCENE-7314
> URL: https://issues.apache.org/jira/browse/LUCENE-7314
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7314.patch, LUCENE-7314.patch
>
>
> Maybe we should graduate these fields (and related queries) to core for 
> Lucene 6.1?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1044 - Unstable

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1044/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/111/consoleText

[repro] Revision: cf9c3c11a28deff188f4edb5ee5cdd0637cdb958

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitWithChaosMonkey -Dtests.seed=7E8A73E785BEAAD1 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH 
-Dtests.timezone=Atlantic/Reykjavik -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestRecovery 
-Dtests.method=testExistOldBufferLog -Dtests.seed=7E8A73E785BEAAD1 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hr 
-Dtests.timezone=MST -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HttpPartitionTest -Dtests.method=test 
-Dtests.seed=7E8A73E785BEAAD1 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=de-DE -Dtests.timezone=America/Chicago 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HttpPartitionTest 
-Dtests.seed=7E8A73E785BEAAD1 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=de-DE -Dtests.timezone=America/Chicago 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d87ea6b1ccd28e0dd8e30565fe95b2e0a31f82e8
[repro] git fetch
[repro] git checkout cf9c3c11a28deff188f4edb5ee5cdd0637cdb958

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       TestRecovery
[repro]       HttpPartitionTest
[repro]       ShardSplitTest
[repro] ant compile-test

[...truncated 3335 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.TestRecovery|*.HttpPartitionTest|*.ShardSplitTest" 
-Dtests.showOutput=onerror  -Dtests.seed=7E8A73E785BEAAD1 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hr -Dtests.timezone=MST 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 129206 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.HttpPartitionTest
[repro]   3/5 failed: org.apache.solr.search.TestRecovery
[repro]   5/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch

[...truncated 3 lines...]
[repro] git checkout branch_7x

[...truncated 3 lines...]
[repro] git merge --ff-only

[...truncated 49 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       ShardSplitTest
[repro] ant compile-test

[...truncated  lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ShardSplitTest" -Dtests.showOutput=onerror  
-Dtests.seed=7E8A73E785BEAAD1 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH 
-Dtests.timezone=Atlantic/Reykjavik -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 119857 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]       ShardSplitTest
[repro] ant compile-test

[...truncated  lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ShardSplitTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH 
-Dtests.timezone=Atlantic/Reykjavik -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 128298 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   5/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest
[repro] git checkout d87ea6b1ccd28e0dd8e30565fe95b2e0a31f82e8

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12587) Reuse Lucene's PriorityQueue for the ExportHandler

2018-07-26 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12587:
-
Attachment: SOLR-12587.patch

> Reuse Lucene's PriorityQueue for the ExportHandler
> --
>
> Key: SOLR-12587
> URL: https://issues.apache.org/jira/browse/SOLR-12587
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>  Labels: export-writer
> Attachments: SOLR-12587.patch, SOLR-12587.patch
>
>
> We have a priority queue in Lucene, {{org.apache.lucene.util.PriorityQueue}}. 
> The Export Handler also implements a PriorityQueue, 
> {{org.apache.solr.handler.export.PriorityQueue}}. Both are obviously very 
> similar, with minor API differences. 
>  
> The aim here is to reuse Lucene's PQ and remove the Solr implementation. 
>  
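For readers who haven't looked at the Lucene class, here is a minimal sketch 
of how {{org.apache.lucene.util.PriorityQueue}} is meant to be subclassed; the 
class, its {{lessThan}} contract, and {{insertWithOverflow}} are real API, 
while the queue name and values below are invented for illustration:

{code:java}
import org.apache.lucene.util.PriorityQueue;

// Keeps the N largest longs seen: lessThan() orders the heap so the
// smallest retained value sits on top and is evicted first.
class LargestLongsQueue extends PriorityQueue<Long> { // illustrative subclass
  LargestLongsQueue(int maxSize) {
    super(maxSize);
  }

  @Override
  protected boolean lessThan(Long a, Long b) {
    return a < b;
  }
}

class Demo {
  public static void main(String[] args) {
    LargestLongsQueue pq = new LargestLongsQueue(3);
    for (long v : new long[] {5, 1, 9, 7, 3}) {
      // once full, keeps the larger of the new value and the current top
      pq.insertWithOverflow(v);
    }
    System.out.println(pq.top()); // prints 5, the smallest of {5, 7, 9}
  }
}
{code}

The Solr {{org.apache.solr.handler.export.PriorityQueue}} follows the same 
heap-with-lessThan shape, which is why the two can plausibly be merged.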



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12587) Reuse Lucene's PriorityQueue for the ExportHandler

2018-07-26 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558743#comment-16558743
 ] 

Varun Thacker commented on SOLR-12587:
--

Updated patch which will apply cleanly once LUCENE-8428 has been committed

> Reuse Lucene's PriorityQueue for the ExportHandler
> --
>
> Key: SOLR-12587
> URL: https://issues.apache.org/jira/browse/SOLR-12587
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>  Labels: export-writer
> Attachments: SOLR-12587.patch, SOLR-12587.patch
>
>
> We have a priority queue in Lucene, {{org.apache.lucene.util.PriorityQueue}}. 
> The Export Handler also implements a PriorityQueue, 
> {{org.apache.solr.handler.export.PriorityQueue}}. Both are obviously very 
> similar, with minor API differences. 
>  
> The aim here is to reuse Lucene's PQ and remove the Solr implementation. 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8060) Enable top-docs collection optimizations by default

2018-07-26 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558729#comment-16558729
 ] 

Hoss Man commented on LUCENE-8060:
--

{quote}Based on your comments I am getting the feeling that you are leaning 
towards exposing this configuration option, having a sensible default and 
pointing users to creating collectors manually if they have more specific 
needs, do I get it right?
{quote}
I dunno ... i like your TotalHits proposal in LUCENE-8430, i like that if new 
users see that object they can read the docs and see that sometimes they might 
not get accurate counts, and that class can have javadoc links to ways they can 
ensure a higher threshold (or an unlimited threshold to force exact counts) ... 
i'm just not sure i like the idea of the TotalHits javadocs needing to link to 
two different ways of achieving the same thing: an IndexSearcher config option 
to change the "defaults" _and_ a TopFieldCollector builder method that takes in 
a value ... seems clunky to me...

But to be clear: i don't have super strong feelings about the clunkiness. Happy 
to defer to you on this.  Just wanted to point out (in my last comment) why it 
felt weird to me.

> Enable top-docs collection optimizations by default
> ---
>
> Key: LUCENE-8060
> URL: https://issues.apache.org/jira/browse/LUCENE-8060
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
>
> We are getting optimizations when hit counts are not required (sorted 
> indexes, MAXSCORE, short-circuiting of phrase queries) but our users won't 
> benefit from them unless we disable exact hit counts by default or we require 
> them to tell us whether hit counts are required.
> I think making hit counts approximate by default is going to be a bit trappy, 
> so I'm rather leaning towards requiring users to tell us explicitly whether 
> they need total hit counts. I can think of two ways to do that: either by 
> passing a boolean to the IndexSearcher constructor or by adding a boolean to 
> all methods that produce TopDocs instances. I like the latter better but I'm 
> open to discussion or other ideas?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12596) DocValues page should explain terms like SORTED_SET, SORTED_NUMERIC

2018-07-26 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-12596:
-

 Summary: DocValues page should explain terms like SORTED_SET, 
SORTED_NUMERIC
 Key: SOLR-12596
 URL: https://issues.apache.org/jira/browse/SOLR-12596
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Erick Erickson
Assignee: Erick Erickson


docvalues.adoc

Unless you dive into the code, the differences between SORTED_SET, 
SORTED_NUMERIC, etc. aren't clear. The link to the javadocs is easy to 
overlook (or to not realize is important).

I'll update the Javadocs to include a very short explanation of each.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1594 - Still Failing

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1594/

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:35087/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:35087/collection1
at 
__randomizedtesting.SeedInfo.seed([7E2DB4E41D56313B:F6798B3EB3AA5CC3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.commit(AbstractFullDistribZkTestBase.java:1591)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:213)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+23) - Build # 22531 - Still Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22531/
Java: 64bit/jdk-11-ea+23 -XX:+UseCompressedOops -XX:+UseG1GC

18 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([450F366B18698A59]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([450F366B18698A59]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Commented] (LUCENE-8060) Enable top-docs collection optimizations by default

2018-07-26 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558683#comment-16558683
 ] 

Adrien Grand commented on LUCENE-8060:
--

{quote}why have a setDefaultNumTotalHitsToTrack(int) just for this concept, and 
not a setter for all the other collector concepts that we currently have 
defaults for in the simple search/searchAfter methods (like Sort sort , boolean 
doDocScores , boolean doMaxScore , etc...)
{quote}
Actually some concepts like the similarity and the query cache policy are set 
as members of IndexSearcher, so this isn't really new? I think the assumption 
is that you most likely need the same values for most of your requests and do not 
need it to be configurable on a per-request basis, unlike the sort or the 
number of hits to collect?
{quote}do we want to go down the route of an IndexSearcherConfig ?
{quote}
A user suggested adding this class last year: LUCENE-7902. I don't have a 
strong opinion on this one besides keeping a simple IndexSearcher ctor that 
only takes a reader and has sensible defaults.
{quote}this seems like it introduces divergent "intermediate APIs" for users to 
learn about that might frustrate them down the road...
{quote}
This is a good point. I also dislike a bit adding new setters/configuration 
options if we can come up with a default value that is reasonable and should 
work for most users, at least as long as their use-case remains simple. I'm 
seeing pros and cons either way and I would probably be fine either way too.

Based on your comments I am getting the feeling that you are leaning towards 
exposing this configuration option, having a sensible default and pointing 
users to creating collectors manually if they have more specific needs, do I 
get it right?
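
To make the two shapes being debated concrete, a hypothetical sketch; neither 
{{setDefaultNumTotalHitsToTrack}} nor the collector parameter below exists at 
this point, both are placeholders taken from this thread:

{code:java}
// Option A (placeholder API): a searcher-wide default, set once.
IndexSearcher searcher = new IndexSearcher(reader);
searcher.setDefaultNumTotalHitsToTrack(1_000); // hypothetical setter from this thread

// Option B (placeholder API): a per-request knob on the collector itself.
TopFieldCollector collector =
    TopFieldCollector.create(sort, 10, /* numTotalHitsToTrack= */ 1_000);
searcher.search(query, collector);
{code}

Option A keeps the simple search() methods untouched; Option B keeps the 
behavior visible at the call site, at the cost of one more overload to learn.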

> Enable top-docs collection optimizations by default
> ---
>
> Key: LUCENE-8060
> URL: https://issues.apache.org/jira/browse/LUCENE-8060
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
>
> We are getting optimizations when hit counts are not required (sorted 
> indexes, MAXSCORE, short-circuiting of phrase queries) but our users won't 
> benefit from them unless we disable exact hit counts by default or we require 
> them to tell us whether hit counts are required.
> I think making hit counts approximate by default is going to be a bit trappy, 
> so I'm rather leaning towards requiring users to tell us explicitly whether 
> they need total hit counts. I can think of two ways to do that: either by 
> passing a boolean to the IndexSearcher constructor or by adding a boolean to 
> all methods that produce TopDocs instances. I like the latter better but I'm 
> open to discussion or other ideas?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10) - Build # 716 - Still Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/716/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at http://127.0.0.1:54031/solr/awhollynewcollection_0: No 
registered leader was found after waiting for 4000ms , collection: 
awhollynewcollection_0 slice: shard2 saw 
state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/9)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{   
"range":"80000000-ffffffff",   "state":"active",   "replicas":{ "core_node3":{   
"core":"awhollynewcollection_0_shard1_replica_n1",   
"base_url":"http://127.0.0.1:54020/solr",   "node_name":"127.0.0.1:54020_solr",   
"state":"active",   "type":"NRT",   "force_set_state":"false"}, "core_node5":{   
"core":"awhollynewcollection_0_shard1_replica_n2",   
"base_url":"http://127.0.0.1:54031/solr",   "node_name":"127.0.0.1:54031_solr",   
"state":"active",   "type":"NRT",   "force_set_state":"false",   
"leader":"true"}}}, "shard2":{   "range":"0-7fffffff",   "state":"active",   
"replicas":{ "core_node7":{   "core":"awhollynewcollection_0_shard2_replica_n4",   
"base_url":"http://127.0.0.1:54022/solr",   "node_name":"127.0.0.1:54022_solr",   
"state":"active",   "type":"NRT",   "force_set_state":"false"}, "core_node8":{   
"core":"awhollynewcollection_0_shard2_replica_n6",   
"base_url":"http://127.0.0.1:54026/solr",   "node_name":"127.0.0.1:54026_solr",   
"state":"down",   "type":"NRT",   "force_set_state":"false"}}}},   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"} with 
live_nodes=[127.0.0.1:54022_solr, 127.0.0.1:54026_solr, 127.0.0.1:54031_solr, 
127.0.0.1:54020_solr]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:54031/solr/awhollynewcollection_0: No 
registered leader was found after waiting for 4000ms , collection: 
awhollynewcollection_0 slice: shard2 saw 
state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/9)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
    "shard1":{
      "range":"80000000-ffffffff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"awhollynewcollection_0_shard1_replica_n1",
          "base_url":"http://127.0.0.1:54020/solr",
          "node_name":"127.0.0.1:54020_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node5":{
          "core":"awhollynewcollection_0_shard1_replica_n2",
          "base_url":"http://127.0.0.1:54031/solr",
          "node_name":"127.0.0.1:54031_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"}}},
    "shard2":{
      "range":"0-7fffffff",
      "state":"active",
      "replicas":{
        "core_node7":{
          "core":"awhollynewcollection_0_shard2_replica_n4",
          "base_url":"http://127.0.0.1:54022/solr",
          "node_name":"127.0.0.1:54022_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node8":{
          "core":"awhollynewcollection_0_shard2_replica_n6",
          "base_url":"http://127.0.0.1:54026/solr",
          "node_name":"127.0.0.1:54026_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"} with live_nodes=[127.0.0.1:54022_solr, 
127.0.0.1:54026_solr, 127.0.0.1:54031_solr, 127.0.0.1:54020_solr]
at 
__randomizedtesting.SeedInfo.seed([1402D11130DCB533:5C77A5A536EF9AA6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 

[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-26 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558626#comment-16558626
 ] 

Steve Rowe commented on SOLR-12412:
---

ASF Jenkins found a reproducing seed for a {{LeaderTragicEventTest}} failure 
[https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/271/]:

{noformat}
Checking out Revision 950b7b6b1b92849721eaed50ecad9711199180e8 
(refs/remotes/origin/branch_7x)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=LeaderTragicEventTest -Dtests.seed=14F869F052BC897B 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-DE -Dtests.timezone=US/Michigan -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.00s J1 | LeaderTragicEventTest (suite) <<<
   [junit4]> Throwable #1: java.lang.AssertionError: ObjectTracker found 1 
object(s) that were not released!!! [TransactionLog]
   [junit4]> 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.TransactionLog
   [junit4]>at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
   [junit4]>at 
org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:188)
   [junit4]>at 
org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:467)
   [junit4]>at 
org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:1323)
   [junit4]>at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:571)
   [junit4]>at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:551)
   [junit4]>at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:345)
   [junit4]>at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:283)
   [junit4]>at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:233)
   [junit4]>at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
   [junit4]>at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
   [junit4]>at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:951)
   [junit4]>at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1167)
   [junit4]>at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:634)
   [junit4]>at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
   [junit4]>at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
   [junit4]>at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
   [junit4]>at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
   [junit4]>at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311)
   [junit4]>at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
   [junit4]>at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
   [junit4]>at 
org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276)
   [junit4]>at 
org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)
   [junit4]>at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)
   [junit4]>at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
   [junit4]>at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
   [junit4]>at 
org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)
   [junit4]>at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
   [junit4]>at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
   [junit4]>at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
   [junit4]>at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
   [junit4]>at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
   [junit4]>at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
   [junit4]>at 

[GitHub] lucene-solr issue #425: WIP SOLR-12555: refactor tests to use expectThrows

2018-07-26 Thread barrotsteindev
Github user barrotsteindev commented on the issue:

https://github.com/apache/lucene-solr/pull/425
  
Hopefully this is satisfactory after the last commit changes.
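
For context, the pattern the PR migrates to, sketched with 
{{LuceneTestCase.expectThrows}} (a real helper; the query call and expected 
error code here are invented for illustration):

{code:java}
// Before: hand-rolled try/fail/catch boilerplate.
try {
  client.query(badParams);
  fail("should have thrown");
} catch (SolrException e) {
  assertEquals(400, e.code());
}

// After: expectThrows() asserts the throw and hands back the exception.
SolrException e = expectThrows(SolrException.class, () -> client.query(badParams));
assertEquals(400, e.code());
{code}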


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8431) Allow top collectors to compute lower bounds of the total hit count

2018-07-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8431:


 Summary: Allow top collectors to compute lower bounds of the total 
hit count
 Key: LUCENE-8431
 URL: https://issues.apache.org/jira/browse/LUCENE-8431
 Project: Lucene - Core
  Issue Type: Sub-task
Reporter: Adrien Grand


As discussed on LUCENE-8060, we should make TopScoreDocCollector and 
TopFieldCollector take a minimum hit count to compute rather than a boolean 
that says whether or not to track hits. This will help implement simple 
pagination, or convey that there are "plenty" of hits.
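
A hypothetical sketch of what this could look like for callers; the 
{{totalHitsThreshold}} argument below is the proposal here, not a released API:

{code:java}
// Collect the top 10 hits, counting total hits exactly only up to 1,000;
// past that threshold the collector may report a lower bound instead.
TopScoreDocCollector collector =
    TopScoreDocCollector.create(10, /* totalHitsThreshold= */ 1_000); // proposed signature
searcher.search(query, collector);
TopDocs topDocs = collector.topDocs();
// topDocs.totalHits would then mean "at least this many" once 1,000 is crossed.
{code}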



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8428) Allow configurable sentinels in PriorityQueue

2018-07-26 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558546#comment-16558546
 ] 

Varun Thacker commented on LUCENE-8428:
---

+1. LGTM

> Allow configurable sentinels in PriorityQueue
> -
>
> Key: LUCENE-8428
> URL: https://issues.apache.org/jira/browse/LUCENE-8428
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8428.patch
>
>
> This is a follow-up to SOLR-12587: Lucene's PriorityQueue API makes it 
> impossible to have a configurable sentinel object since the parent 
> constructor is called before a subclass has the opportunity to set anything 
> that helps create those sentinels. 
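
The underlying problem is plain Java initialization order: the parent 
constructor runs before any subclass field initializers, so an overridable 
sentinel hook invoked from the parent constructor cannot see subclass state. 
A minimal sketch of the pitfall (Parent/Child are illustrative, not Lucene 
classes):

{code:java}
abstract class Parent {
  Parent(int size) {
    for (int i = 0; i < size; i++) {
      makeSentinel(); // runs before Child's field initializers
    }
  }
  protected abstract Object makeSentinel();
}

class Child extends Parent {
  private final String template = "sentinel"; // assigned only after super() returns

  Child(int size) {
    super(size);
  }

  @Override
  protected Object makeSentinel() {
    return template; // still null when called from Parent's constructor
  }
}
{code}

Passing the sentinel factory into the constructor (e.g. as a {{Supplier}}) 
sidesteps this, since the caller builds the factory before the queue's 
constructor ever runs, which is roughly the direction the attached patch takes.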



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11542) Add URP to route time partitioned collections

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558508#comment-16558508
 ] 

ASF subversion and git services commented on SOLR-11542:


Commit 8120d84219d77a04ca3663d0f85e47641d7bd5be in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8120d84 ]

SOLR-11542: Add more logging via @LogLevel to diagnose rare failures

(cherry picked from commit e2b08a4)


> Add URP to route time partitioned collections
> -
>
> Key: SOLR-11542
> URL: https://issues.apache.org/jira/browse/SOLR-11542
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR_11542_time_series_URP.patch, 
> SOLR_11542_time_series_URP.patch, SOLR_11542_time_series_URP.patch
>
>
> Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
> for the metadata facility), we'll then need to route documents to the right 
> collection.  I propose a new URP.  _(edit: originally it was thought 
> DistributedURP would be modified but thankfully we can avoid that)._
> The scope of this issue is:
> * decide on some alias metadata names & semantics
> * decide the collection suffix pattern.  Read/write code (needed to route).
> * the routing code
> No new partition creation or deletion happens in this issue.
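
For readers unfamiliar with URPs, a minimal sketch of the shape such a 
processor takes; the class name, field name, and suffix logic below are 
invented for illustration, since the real metadata names and suffix pattern 
are exactly what this issue is deciding:

{code:java}
import java.io.IOException;
import java.util.Date;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.update.AddUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

// Hypothetical sketch: pick a time-based target partition for each document
// by inspecting its timestamp field before the add proceeds.
class TimeRoutingSketchProcessor extends UpdateRequestProcessor {
  TimeRoutingSketchProcessor(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processAdd(AddUpdateCommand cmd) throws IOException {
    SolrInputDocument doc = cmd.getSolrInputDocument();
    Date ts = (Date) doc.getFieldValue("timestamp_dt"); // illustrative field name
    String targetCollection =
        "myalias_" + ts.toInstant().toString().substring(0, 10); // e.g. myalias_2018-07-26
    // A real router would forward the document to targetCollection here;
    // this sketch only marks where that decision happens.
    super.processAdd(cmd);
  }
}
{code}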



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11542) Add URP to route time partitioned collections

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558506#comment-16558506
 ] 

ASF subversion and git services commented on SOLR-11542:


Commit e2b08a4d473e68ca5f1b868cc55f550585221be7 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e2b08a4 ]

SOLR-11542: Add more logging via @LogLevel to diagnose rare failures


> Add URP to route time partitioned collections
> -
>
> Key: SOLR-11542
> URL: https://issues.apache.org/jira/browse/SOLR-11542
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.2
>
> Attachments: SOLR_11542_time_series_URP.patch, 
> SOLR_11542_time_series_URP.patch, SOLR_11542_time_series_URP.patch
>
>
> Assuming we have some time partitioning metadata on an alias (see SOLR-11487 
> for the metadata facility), we'll then need to route documents to the right 
> collection.  I propose a new URP.  _(edit: originally it was thought 
> DistributedURP would be modified but thankfully we can avoid that)._
> The scope of this issue is:
> * decide on some alias metadata names & semantics
> * decide the collection suffix pattern.  Read/write code (needed to route).
> * the routing code
> No new partition creation or deletion happens in this issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+23) - Build # 7448 - Still Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7448/
Java: 64bit/jdk-11-ea+23 -XX:+UseCompressedOops -XX:+UseSerialGC

16 tests failed.
FAILED:  
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.testFullImportFieldsParam

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006\collection1

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006\collection1
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestSolrEntityProcessorEndToEnd_AD5722368EA3CE4A-001\tempDir-006

at 
__randomizedtesting.SeedInfo.seed([AD5722368EA3CE4A:5A9C6DFF23A17D97]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd$SolrInstance.tearDown(TestSolrEntityProcessorEndToEnd.java:361)
at 
org.apache.solr.handler.dataimport.TestSolrEntityProcessorEndToEnd.tearDown(TestSolrEntityProcessorEndToEnd.java:142)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:992)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 271 - Still Failing

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/271/

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.LeaderTragicEventTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.TransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.update.TransactionLog.<init>(TransactionLog.java:188)  at 
org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:467)  at 
org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:1323)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:571)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:551)  at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:345)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:283)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:233)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:951)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1167)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:634)
  at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
  at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)  
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)  at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
  at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:109)
  at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)  
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1642)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:674)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at 

Re: SynonymGraphFilter followed by StopFilter

2018-07-26 Thread Andrea Gazzarini

Hi Walter,
many thanks for the response. If I had no constraints at all, I would 
agree with you; from your message I clearly understand your experience 
is greater than mine. My 2 cents inline below:


> Move the synonym filter to the index analyzer chain. That provides 
better performance and avoids some surprising relevance behavior. With 
synonyms at query time, you’ll see different idf for terms in the 
synonym set, with the rare variant scoring higher. That is probably the 
opposite of what is expected.


Unfortunately moving the synonym filter to the index analyzer is not an 
option: the project I'm working on has a huge index, and the synonyms 
list is something that (at least at this stage) changes frequently; 
re-indexing everything from scratch each time a change occurs is a big 
problem. On the other side, the IDF issue you mention hasn't produced 
many unwanted effects, at least so far. But I got the point, thanks for 
the hint.


> Also, phrase synonyms just don’t work at query time because the terms 
are parsed into individual tokens by the query parser, not the tokenizer.
Here I don't follow you: using SynonymGraphFilter + splitOnWhitespace 
= false + autoGeneratePhraseQueries I get the synonym phrases working 
correctly (see the first example in my email).
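
For reference, here is that setup as a minimal, self-contained Lucene 
sketch (my own illustration, assuming Lucene 7.x with analyzers-common 
and queryparser on the classpath, and a programmatic SynonymMap in 
place of synonyms.txt):

import java.util.Arrays;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;

public class SynonymThenStopRepro {
  public static void main(String[] args) throws Exception {
    // "out of warranty" also emits "oow" (expand=true keeps the original terms)
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    CharsRefBuilder scratch = new CharsRefBuilder();
    builder.add(SynonymMap.Builder.join(new String[] {"out", "of", "warranty"}, scratch),
        new CharsRef("oow"), true);
    SynonymMap synonyms = builder.build();

    Analyzer queryAnalyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        TokenStream sink = new SynonymGraphFilter(source, synonyms, false);
        // the StopFilter after the graph filter removes "of", punching a
        // hole in the 3-position span covered by the "oow" synonym
        sink = new StopFilter(sink, new CharArraySet(Arrays.asList("of"), false));
        return new TokenStreamComponents(source, sink);
      }
    };

    QueryParser parser = new QueryParser("title", queryAnalyzer);
    parser.setSplitOnWhitespace(false);
    parser.setAutoGeneratePhraseQueries(true);
    // prints the misparsed query reported in this thread
    System.out.println(parser.parse("my tv went out of warranty something of"));
  }
}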


> Don’t use stop words. Just remove that line. Removing stop words is a 
performance and space hack that was useful in the 1960’s, but causes 
problems now. I’ve never used stop word removal and I started in search 
with Infoseek in 1996. Stop word removal is like a binary idf, ignoring 
common words. Since we have idf, we can give a lower score to common 
words and keep them in the index.


And this is, as I see it, something that has animated long discussions 
about using or avoiding stopwords. I will check what applying your 
suggestion would mean for my project, but in the meantime I think, also 
looking at the JIRA issues Alan pointed to in his answer, that the issue 
is there and it's real; I mean, something doesn't work as expected (my 
use case, as far as I understand, is just one example, because the 
problem is broader and relates to FilteringTokenFilter).


Thanks again,
Andrea

On 26/07/18 16:59, Walter Underwood wrote:
Move the synonym filter to the index analyzer chain. That provides 
better performance and avoids some surprising relevance behavior. With 
synonyms at query time, you’ll see different idf for terms in the 
synonym set, with the rare variant scoring higher. That is probably 
the opposite of what is expected.


Also, phrase synonyms just don’t work at query time because the terms 
are parsed into individual tokens by the query parser, not the tokenizer.


Don’t use stop words. Just remove that line. Removing stop words is a 
performance and space hack that was useful in the 1960’s, but causes 
problems now. I’ve never used stop word removal and I started in 
search with Infoseek in 1996. Stop word removal is like a binary idf, 
ignoring common words. Since we have idf, we can give a lower score to 
common words and keep them in the index.


Do those two things and it should work as you expect.

wunder
Walter Underwood
wun...@wunderwood.org 
http://observer.wunderwood.org/  (my blog)

On Jul 26, 2018, at 3:23 AM, Andrea Gazzarini wrote:


Hi Alan, thanks for the response and thank you very much for the pointers


On 26/07/18 12:16, Alan Woodward wrote:

Hi Andrea,

This is a long-standing issue: see 
https://issues.apache.org/jira/browse/LUCENE-4065 and 
https://issues.apache.org/jira/browse/LUCENE-8250 for discussion.  I 
don’t think we’ve reached a consensus on how to fix it yet, but more 
examples are good.


Unfortunately I don’t think changing the StopFilter to ignore 
SYNONYM tokens will work, because then you’ll generate queries that 
always fail - they’ll search for ‘of’ in the middle of the phrase, 
but ‘of’ never gets indexed because it’s removed by the StopFilter 
at index time.


- Alan

On 26 Jul 2018, at 08:04, Andrea Gazzarini wrote:


Hi,
I have the following field type definition:
<fieldType ... autoGeneratePhraseQueries="true">
  <analyzer>
    ...
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" 
ignoreCase="false" expand="true"/>
    <filter class="solr.StopFilterFactory" ignoreCase="false"/>
  </analyzer>
</fieldType>

Where synonyms and stopwords are defined as follows:

synonyms = out of warranty,oow
stopwords = of

Running the following query:

q=my tv went out of warranty something of

I get wrong results, with the following explain:

title:my title:tv title:went (title:oow PhraseQuery(title:"out ? 
warranty something"))


That is, the synonym is correctly detected and I see the graph 
information is correctly reported in the positionLength, but it seems 
it is wrongly interpreted by the QueryParser.
I guess the reason is the "of" removal operated by the StopFilter, 
which

  * removes the "of" term within the phrase (I wouldn't want that)
  * creates a "hole" in the span defined by the "oow" term, which has 
been marked as a synonym with a positionLength = 3, therefore including 
the next available term (something).

I tried to change the StopFilter in order to ignore stopwords that are 
marked as SYNONYM or that are part of a previous synonym span, and it 
works: it correctly produces the following query:

title:my title:tv title:went (title:oow PhraseQuery(title:"out of 
warranty")) title:something

So I'd like to ask your opinion about this. Am I missing something? Do 
you think it's better to open a JIRA issue? If the solution is a graph 
aware stop filter, do you think it's better to change the existing 
filter or to subclass it?

Best,
Andrea
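
To make the idea concrete, here is a minimal sketch of such a filter 
(my own illustration of the approach described above, not the actual 
patch, assuming Lucene 7.x; note Alan's caveat elsewhere in this thread 
that this only helps at query time, since "of" is still removed from 
the index):

import java.io.IOException;

import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.FilteringTokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;
import org.apache.lucene.analysis.tokenattributes.TypeAttribute;

/** Drops stopwords unless they are synonyms or sit inside a synonym span. */
final class SynonymAwareStopFilter extends FilteringTokenFilter {

  private final CharArraySet stopWords;
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final TypeAttribute typeAtt = addAttribute(TypeAttribute.class);
  private final PositionIncrementAttribute posIncAtt =
      addAttribute(PositionIncrementAttribute.class);
  private final PositionLengthAttribute posLenAtt =
      addAttribute(PositionLengthAttribute.class);

  private int position = -1; // position of the current token
  private int spanEnd = -1;  // last position covered by a synonym span

  SynonymAwareStopFilter(TokenStream in, CharArraySet stopWords) {
    super(in);
    this.stopWords = stopWords;
  }

  @Override
  protected boolean accept() throws IOException {
    position += posIncAtt.getPositionIncrement();
    if (posLenAtt.getPositionLength() > 1) {
      // a multi-position token such as "oow" (posLen=3) opens a span
      spanEnd = Math.max(spanEnd, position + posLenAtt.getPositionLength() - 1);
    }
    if (!stopWords.contains(termAtt.buffer(), 0, termAtt.length())) {
      return true; // not a stopword: always keep
    }
    // keep stopwords that the synonym filter injected, or that sit
    // inside the span opened by a previous synonym
    return SynonymGraphFilter.TYPE_SYNONYM.equals(typeAtt.type()) || position <= spanEnd;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    position = -1;
    spanEnd = -1;
  }
}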

Re: SynonymGraphFilter followed by StopFilter

2018-07-26 Thread Alan Woodward
> Also, phrase synonyms just don’t work at query time because the terms are 
> parsed into individual tokens by the query parser, not the tokenizer.

This is no longer the case.  In general I’d avoid index-time synonyms in lucene 
because synonyms can create graphs (eg if a single term gets expanded to 
several terms), and we can’t index graphs correctly.
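
To make the graph concrete, here is a small sketch (my own illustration, 
again assuming Lucene 7.x) that expands the single term "oow" to "out of 
warranty" and prints the position attributes; the positionLength carried 
by the expanded path is exactly the information an index cannot store:

import java.io.StringReader;

import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;

public class SynonymGraphDemo {
  public static void main(String[] args) throws Exception {
    // single term -> multi-term expansion: oow => out of warranty
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    CharsRefBuilder scratch = new CharsRefBuilder();
    builder.add(new CharsRef("oow"),
        SynonymMap.Builder.join(new String[] {"out", "of", "warranty"}, scratch), true);

    Tokenizer tok = new WhitespaceTokenizer();
    tok.setReader(new StringReader("went oow yesterday"));
    TokenStream ts = new SynonymGraphFilter(tok, builder.build(), true);
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    PositionIncrementAttribute inc = ts.addAttribute(PositionIncrementAttribute.class);
    PositionLengthAttribute len = ts.addAttribute(PositionLengthAttribute.class);

    ts.reset();
    while (ts.incrementToken()) {
      // "oow" comes back with posLen=3, spanning the three positions of
      // "out of warranty"; indexing keeps positions but drops positionLength
      System.out.printf("%s posInc=%d posLen=%d%n",
          term, inc.getPositionIncrement(), len.getPositionLength());
    }
    ts.end();
    ts.close();
  }
}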

I’d agree that removing stop words is generally unnecessary, but there are 
other reasons that you’d want to filter out terms from the Tokenstream, and we 
should be able to handle those situations correctly.

> On 26 Jul 2018, at 15:59, Walter Underwood wrote:
> 
> Move the synonym filter to the index analyzer chain. That provides better 
> performance and avoids some surprising relevance behavior. With synonyms at 
> query time, you’ll see different idf for terms in the synonym set, with the 
> rare variant scoring higher. That is probably the opposite of what is 
> expected.
> 
> Also, phrase synonyms just don’t work at query time because the terms are 
> parsed into individual tokens by the query parser, not the tokenizer.
> 
> Don’t use stop words. Just remove that line. Removing stop words is a 
> performance and space hack that was useful in the 1960’s, but causes problems 
> now. I’ve never used stop word removal and I started in search with Infoseek 
> in 1996. Stop word removal is like a binary idf, ignoring common words. Since 
> we have idf, we can give a lower score to common words and keep them in the 
> index. 
> 
> Do those two things and it should work as you expect. 
> 
> wunder
> Walter Underwood
> wun...@wunderwood.org 
> http://observer.wunderwood.org/  (my blog)
> 
>> On Jul 26, 2018, at 3:23 AM, Andrea Gazzarini wrote:
>> 
>> Hi Alan, thanks for the response and thank you very much for the pointers
>> 
>> On 26/07/18 12:16, Alan Woodward wrote:
>>> Hi Andrea,
>>> 
>>> This is a long-standing issue: see 
>>> https://issues.apache.org/jira/browse/LUCENE-4065 
>>>  and 
>>> https://issues.apache.org/jira/browse/LUCENE-8250 
>>>  for discussion.  I 
>>> don’t think we’ve reached a consensus on how to fix it yet, but more 
>>> examples are good.
>>> 
>>> Unfortunately I don’t think changing the StopFilter to ignore SYNONYM 
>>> tokens will work, because then you’ll generate queries that always fail - 
>>> they’ll search for ‘of’ in the middle of the phrase, but ‘of’ never gets 
>>> indexed because it’s removed by the StopFilter at index time.
>>> 
>>> - Alan
>>> 
 On 26 Jul 2018, at 08:04, Andrea Gazzarini wrote:
 
 Hi, 
 I have the following field type definition: 
 <fieldType ... autoGeneratePhraseQueries="true">
   <analyzer>
     ...
     <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" 
 ignoreCase="false" expand="true"/>
     <filter class="solr.StopFilterFactory" ignoreCase="false"/>
   </analyzer>
 </fieldType>
 
 
 Where synonyms and stopwords are defined as follows: 
 
 synonyms = out of warranty,oow
 stopwords = of
 
 Running the following query:
 
 q=my tv went out of warranty something of
 
 I get wrong results, with the following explain: 
 
 title:my title:tv title:went (title:oow PhraseQuery(title:"out ? warranty 
 something"))
 
 That is, the synonym is correctly detected and I see the graph information 
 is correctly reported in the positionLength, but it seems it is wrongly 
 interpreted by the QueryParser. 
 I guess the reason is the "of" removal operated by the StopFilter, which 
 removes the "of" term within the phrase (I wouldn't want that) and 
 creates a "hole" in the span defined by the "oow" term, which has been 
 marked as a synonym with a positionLength = 3, therefore including the 
 next available term (something). 
 I tried to change the StopFilter in order to ignore stopwords that are 
 marked as SYNONYM or that are part of a previous synonym span, and it 
 works: it correctly produces the following query: 
 
 title:my title:tv title:went (title:oow PhraseQuery(title:"out of 
 warranty")) title:something
 
 So I'd like to ask your opinion about this. Am I missing something? Do you 
 think it's better to open a JIRA issue? If the solution is a graph aware 
 stop filter, do you think it's better to change the existing filter or to 
 subclass it?
 
 Best, 
 Andrea
 
 
>>> 
>> 
> 



Re: Tip: patches from IntelliJ IDEA

2018-07-26 Thread David Smiley
Sorry everyone for the noise; it turned out to be a red herring.  Yetus can
handle patches created by IntelliJ fine.  If it couldn't, Yetus would still
comment and loudly complain about such a patch problem (as evidenced in other
projects using Yetus).  Allen Wittenauer was instrumental in helping out
here.

On Wed, Jul 25, 2018 at 2:31 PM David Smiley 
wrote:

> Maybe... though I find the quickness of a bash/sed script more desirable.
> Our ant build has a lot going on as it is.  I could add this to dev-tools
> somewhere.
>
> I posted here: https://issues.apache.org/jira/browse/YETUS-645 and it is
> getting some traction, so let's see where it leads.
>
> On Wed, Jul 25, 2018 at 1:13 PM Erik Hatcher 
> wrote:
>
>> David -
>>
>> Would it make sense to bake that into the build file so that it's
>> immediately handy? Maybe `ant idea-patch-fix`?
>>
>> Erik
>>
>>
>>
>> On Jul 25, 2018, at 10:35 AM, David Smiley 
>> wrote:
>>
>> I use IntelliJ IDEA, and furthermore I use the "create patch" feature to
>> generate a patch file.  This is far more convenient than using the CLI when
>> there are multiple "change lists", among other reasons, since at the CLI
>> I would presumably have to list out each changed file to include.
>>
>> However, IntelliJ's patches aren't always compatible with other tools
>> that consume patch files.  We use Apache Yetus and it doesn't like them --
>> it won't even kick off a build so you'll never see a comment from it in the
>> related JIRA issue.  JetBrains is tracking this patch compatibility
>> deficiency and they may improve it in the future but it's been years.
>>
>> I wrote the following one-liner script on my path that I use to convert a
>> patch file in place.  The only thing it does, and the only thing necessary
>> to make Yetus like it, is to add the "a/" and "b/" to the file paths in the patch.
>> Here it is:
>>
>> # see https://youtrack.jetbrains.com/issue/IDEA-92793
>> sed -i '' -e 's/^--- /--- a\//g' -e 's/^+++ /+++ b\//g' "$1"
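>>
>> For example (hypothetical script name), with the one-liner saved as
>> idea-patch-fix on your PATH, running "idea-patch-fix SOLR-12519.patch"
>> rewrites the file in place with the "a/" and "b/" prefixes Yetus expects.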
>>
>> I'm sharing this so others know of the issue and may want to use this
>> script as well.  I will report it to Yetus; maybe they'll include detection
>> of when to do this so I don't have to remember to.
>>
>> ~ David
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>>
>> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (LUCENE-8369) Remove the spatial module as it is obsolete

2018-07-26 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16558426#comment-16558426
 ] 

David Smiley commented on LUCENE-8369:
--

Thanks for your support [~aw]!

> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty with only a couple 
> utilities that aren't used by anything in the entire codebase -- 
> GeoRelationUtils, and MortonEncoder.  Perhaps it should have been removed 
> earlier in LUCENE-7664 which was the removal of GeoPointField which was 
> essentially why the module existed.  Better late than never.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8060) Enable top-docs collection optimizations by default

2018-07-26 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8060:
-
Summary: Enable top-docs collection optimizations by default  (was: Require 
users to tell us whether they need total hit counts)

> Enable top-docs collection optimizations by default
> ---
>
> Key: LUCENE-8060
> URL: https://issues.apache.org/jira/browse/LUCENE-8060
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
>
> We are getting optimizations when hit counts are not required (sorted 
> indexes, MAXSCORE, short-circuiting of phrase queries) but our users won't 
> benefit from them unless we disable exact hit counts by default or we require 
> them to tell us whether hit counts are required.
> I think making hit counts approximate by default is going to be a bit trappy, 
> so I'm rather leaning towards requiring users to tell us explicitly whether 
> they need total hit counts. I can think of two ways to do that: either by 
> passing a boolean to the IndexSearcher constructor or by adding a boolean to 
> all methods that produce TopDocs instances. I like the latter better but I'm 
> open to discussion or other ideas?
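
A hypothetical sketch of the two API shapes discussed above (the
needTotalHits flag does not exist in Lucene; the commented-out
parameters are illustrative only, while the surrounding calls are real):

    // Option 1: say it once, when the searcher is constructed.
    IndexSearcher searcher = new IndexSearcher(reader /*, needTotalHits=false */);
    // Option 2: say it per call, on every method that produces TopDocs.
    TopDocs hits = searcher.search(query, 10 /*, needTotalHits=false */);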



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205468112
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.response.transform;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Multimap;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.IndexableField;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.SortedDocValues;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.SortField;
+import org.apache.lucene.search.join.BitSetProducer;
+import org.apache.lucene.search.join.ToChildBlockJoinQuery;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.common.SolrDocument;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.DocsStreamer;
+import org.apache.solr.schema.FieldType;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.schema.SchemaField;
+import org.apache.solr.search.DocIterator;
+import org.apache.solr.search.DocList;
+import org.apache.solr.search.SolrDocumentFetcher;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.SolrReturnFields;
+
+import static 
org.apache.solr.response.transform.ChildDocTransformerFactory.NUM_SEP_CHAR;
+import static 
org.apache.solr.response.transform.ChildDocTransformerFactory.PATH_SEP_CHAR;
+import static org.apache.solr.schema.IndexSchema.NEST_PATH_FIELD_NAME;
+
+class DeeplyNestedChildDocTransformer extends DocTransformer {
+
+  private final String name;
+  protected final SchemaField idField;
+  protected final SolrQueryRequest req;
+  protected final IndexSchema schema;
+  private BitSetProducer parentsFilter;
+  protected int limit;
+  private final static Sort docKeySort = new Sort(new SortField(null, 
SortField.Type.DOC, false));
+  private Query childFilterQuery;
+
+  public DeeplyNestedChildDocTransformer(String name, final BitSetProducer 
parentsFilter,
+ final SolrQueryRequest req, final 
Query childFilterQuery, int limit) {
+this.name = name;
+this.schema = req.getSchema();
+this.idField = this.schema.getUniqueKeyField();
+this.req = req;
+this.parentsFilter = parentsFilter;
+this.limit = limit;
+this.childFilterQuery = childFilterQuery;
+  }
+
+  @Override
+  public String getName()  {
+return name;
+  }
+
+  @Override
+  public String[] getExtraRequestFields() {
+// we always need the idField (of the parent) in order to fill out 
its children
+return new String[] { idField.getName() };
+  }
+
+  @Override
+  public void transform(SolrDocument rootDoc, int rootDocId) {
+
+FieldType idFt = idField.getType();
+
+String rootIdExt = 
getSolrFieldString(rootDoc.getFirstValue(idField.getName()), idFt);
+
+try {
+  Query parentQuery = idFt.getFieldQuery(null, idField, rootIdExt);
+  Query query = new ToChildBlockJoinQuery(parentQuery, parentsFilter);
+  SolrIndexSearcher searcher = context.getSearcher();
+  DocList children = searcher.getDocList(query, childFilterQuery, 
docKeySort, 0, limit);
+  long segAndId = searcher.lookupId(new BytesRef(rootIdExt));
+  final int seg = (int) (segAndId >> 32);
+  final LeafReaderContext leafReaderContext = 
searcher.getIndexReader().leaves().get(seg);
+  final SortedDocValues 

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205488727
  
--- Diff: 
solr/core/src/test/org/apache/solr/response/transform/TestDeeplyNestedChildDocTransformer.java
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.response.transform;
+
+import java.util.Iterator;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import com.google.common.collect.Iterables;
+import org.apache.lucene.document.StoredField;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrDocument;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.BasicResultContext;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestDeeplyNestedChildDocTransformer extends SolrTestCaseJ4 {
+
+  private static AtomicInteger counter = new AtomicInteger();
+  private static final char PATH_SEP_CHAR = '/';
+  private static final String[] types = {"donut", "cake"};
+  private static final String[] ingredients = {"flour", "cocoa", 
"vanilla"};
+  private static final Iterator<String> ingredientsCycler = 
Iterables.cycle(ingredients).iterator();
+  private static final String[] names = {"Yaz", "Jazz", "Costa"};
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+initCore("solrconfig-update-processor-chains.xml", "schema15.xml");
+  }
+
+  @After
+  public void after() throws Exception {
+assertU(delQ("*:*"));
+assertU(commit());
+counter.set(0); // reset id counter
+  }
+
+  @Test
+  public void testParentFilterJSON() throws Exception {
+indexSampleData(10);
+String[] tests = new String[] {
+"/response/docs/[0]/type_s==[donut]",
+"/response/docs/[0]/toppings/[0]/type_s==[Regular]",
+"/response/docs/[0]/toppings/[1]/type_s==[Chocolate]",
+"/response/docs/[0]/toppings/[0]/ingredients/[0]/name_s==[cocoa]",
+"/response/docs/[0]/toppings/[1]/ingredients/[1]/name_s==[cocoa]",
+"/response/docs/[0]/lonely/test_s==[testing]",
+"/response/docs/[0]/lonely/lonelyGrandChild/test2_s==[secondTest]",
+};
+
+try(SolrQueryRequest req = req("q", "type_s:donut", "sort", "id asc", 
"fl", "*, _nest_path_, [child hierarchy=true]")) {
+  BasicResultContext res = (BasicResultContext) 
h.queryAndResponse("/select", req).getResponse();
+  Iterator<SolrDocument> docsStreamer = res.getProcessedDocuments();
+  while (docsStreamer.hasNext()) {
+SolrDocument doc = docsStreamer.next();
+int currDocId = Integer.parseInt(((StoredField) 
doc.getFirstValue("id")).stringValue());
+assertEquals("queried docs are not equal to expected output for 
id: " + currDocId, fullNestedDocTemplate(currDocId), doc.toString());
+  }
+}
+
+assertJQ(req("q", "type_s:donut",
+"sort", "id asc",
+"fl", "*, _nest_path_, [child hierarchy=true]"),
+tests);
+  }
+
+  @Test
+  public void testExactPath() throws Exception {
+indexSampleData(2);
+String[] tests = {
+"/response/numFound==4",
+"/response/docs/[0]/_nest_path_=='toppings#0'",
+"/response/docs/[1]/_nest_path_=='toppings#0'",
+"/response/docs/[2]/_nest_path_=='toppings#1'",
+"/response/docs/[3]/_nest_path_=='toppings#1'",
+};
+
+assertJQ(req("q", "_nest_path_:*toppings/",
+"sort", "_nest_path_ asc",
+"fl", "*, _nest_path_"),
+tests);
+
+assertJQ(req("q", "+_nest_path_:\"toppings/\"",
+"sort", "_nest_path_ asc",
+"fl", "*, _nest_path_"),
+tests);
+  }
+
+  @Test
+  public void testChildFilterJSON() throws Exception {
+indexSampleData(10);
+String[] tests = new String[] {
+

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205477055
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205466774
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---
+class DeeplyNestedChildDocTransformer extends DocTransformer {
--- End diff --

As with our URP, let's forgo the "Deeply" terminology.  I hope this will 
simply be how any nested docs are done in the future, rather than making a 
distinction.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205475640
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205475272
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205480076
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205474348
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java
 ---

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205482972
  
--- Diff: 
solr/core/src/test/org/apache/solr/response/transform/TestDeeplyNestedChildDocTransformer.java
 ---
@@ -0,0 +1,227 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.response.transform;
+
+import java.util.Iterator;
+import java.util.concurrent.atomic.AtomicInteger;
+
+import com.google.common.collect.Iterables;
+import org.apache.lucene.document.StoredField;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrDocument;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.BasicResultContext;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestDeeplyNestedChildDocTransformer extends SolrTestCaseJ4 {
+
+  private static AtomicInteger counter = new AtomicInteger();
+  private static final char PATH_SEP_CHAR = '/';
+  private static final String[] types = {"donut", "cake"};
+  private static final String[] ingredients = {"flour", "cocoa", 
"vanilla"};
+  private static final Iterator ingredientsCycler = 
Iterables.cycle(ingredients).iterator();
+  private static final String[] names = {"Yaz", "Jazz", "Costa"};
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+initCore("solrconfig-update-processor-chains.xml", "schema15.xml");
+  }
+
+  @After
+  public void after() throws Exception {
+assertU(delQ("*:*"));
+assertU(commit());
+counter.set(0); // reset id counter
+  }
+
+  @Test
+  public void testParentFilterJSON() throws Exception {
+indexSampleData(10);
+String[] tests = new String[] {
+"/response/docs/[0]/type_s==[donut]",
+"/response/docs/[0]/toppings/[0]/type_s==[Regular]",
+"/response/docs/[0]/toppings/[1]/type_s==[Chocolate]",
+"/response/docs/[0]/toppings/[0]/ingredients/[0]/name_s==[cocoa]",
+"/response/docs/[0]/toppings/[1]/ingredients/[1]/name_s==[cocoa]",
+"/response/docs/[0]/lonely/test_s==[testing]",
+"/response/docs/[0]/lonely/lonelyGrandChild/test2_s==[secondTest]",
+};
+
+try(SolrQueryRequest req = req("q", "type_s:donut", "sort", "id asc", 
"fl", "*, _nest_path_, [child hierarchy=true]")) {
+  BasicResultContext res = (BasicResultContext) 
h.queryAndResponse("/select", req).getResponse();
+  Iterator docsStreamer = res.getProcessedDocuments();
+  while (docsStreamer.hasNext()) {
+SolrDocument doc = docsStreamer.next();
+int currDocId = Integer.parseInt(((StoredField) 
doc.getFirstValue("id")).stringValue());
+assertEquals("queried docs are not equal to expected output for 
id: " + currDocId, fullNestedDocTemplate(currDocId), doc.toString());
+  }
+}
+
+assertJQ(req("q", "type_s:donut",
+"sort", "id asc",
+"fl", "*, _nest_path_, [child hierarchy=true]"),
+tests);
+  }
+
+  @Test
+  public void testExactPath() throws Exception {
+    indexSampleData(2);
+    String[] tests = {
+        "/response/numFound==4",
+        "/response/docs/[0]/_nest_path_=='toppings#0'",
+        "/response/docs/[1]/_nest_path_=='toppings#0'",
+        "/response/docs/[2]/_nest_path_=='toppings#1'",
+        "/response/docs/[3]/_nest_path_=='toppings#1'",
+    };
+
+    assertJQ(req("q", "_nest_path_:*toppings/",
+        "sort", "_nest_path_ asc",
+        "fl", "*, _nest_path_"),
+        tests);
+
+    assertJQ(req("q", "+_nest_path_:\"toppings/\"",
+        "sort", "_nest_path_ asc",
+        "fl", "*, _nest_path_"),
+        tests);
+  }
+
+  @Test
+  public void testChildFilterJSON() throws Exception {
+    indexSampleData(10);
+    String[] tests = new String[] {
+

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205478036
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java ---
@@ -0,0 +1,224 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.response.transform;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Set;
+
+import com.google.common.collect.ArrayListMultimap;
+import com.google.common.collect.Multimap;
+import org.apache.lucene.index.DocValues;
+import org.apache.lucene.index.IndexableField;
+import org.apache.lucene.index.LeafReaderContext;
+import org.apache.lucene.index.SortedDocValues;
+import org.apache.lucene.search.Query;
+import org.apache.lucene.search.Sort;
+import org.apache.lucene.search.SortField;
+import org.apache.lucene.search.join.BitSetProducer;
+import org.apache.lucene.search.join.ToChildBlockJoinQuery;
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.common.SolrDocument;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.DocsStreamer;
+import org.apache.solr.schema.FieldType;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.schema.SchemaField;
+import org.apache.solr.search.DocIterator;
+import org.apache.solr.search.DocList;
+import org.apache.solr.search.SolrDocumentFetcher;
+import org.apache.solr.search.SolrIndexSearcher;
+import org.apache.solr.search.SolrReturnFields;
+
+import static org.apache.solr.response.transform.ChildDocTransformerFactory.NUM_SEP_CHAR;
+import static org.apache.solr.response.transform.ChildDocTransformerFactory.PATH_SEP_CHAR;
+import static org.apache.solr.schema.IndexSchema.NEST_PATH_FIELD_NAME;
+
+class DeeplyNestedChildDocTransformer extends DocTransformer {
+
+  private final String name;
+  protected final SchemaField idField;
+  protected final SolrQueryRequest req;
+  protected final IndexSchema schema;
+  private BitSetProducer parentsFilter;
+  protected int limit;
+  private static final Sort docKeySort = new Sort(new SortField(null, SortField.Type.DOC, false));
+  private Query childFilterQuery;
+
+  public DeeplyNestedChildDocTransformer(String name, final BitSetProducer parentsFilter,
+                                         final SolrQueryRequest req, final Query childFilterQuery, int limit) {
+    this.name = name;
+    this.schema = req.getSchema();
+    this.idField = this.schema.getUniqueKeyField();
+    this.req = req;
+    this.parentsFilter = parentsFilter;
+    this.limit = limit;
+    this.childFilterQuery = childFilterQuery;
+  }
+
+  @Override
+  public String getName() {
+    return name;
+  }
+
+  @Override
+  public String[] getExtraRequestFields() {
+    // we always need the idField (of the parent) in order to fill out its children
+    return new String[] { idField.getName() };
+  }
+
+  @Override
+  public void transform(SolrDocument rootDoc, int rootDocId) {
+
+    FieldType idFt = idField.getType();
+
+    String rootIdExt = getSolrFieldString(rootDoc.getFirstValue(idField.getName()), idFt);
+
+    try {
+      Query parentQuery = idFt.getFieldQuery(null, idField, rootIdExt);
+      Query query = new ToChildBlockJoinQuery(parentQuery, parentsFilter);
+      SolrIndexSearcher searcher = context.getSearcher();
+      DocList children = searcher.getDocList(query, childFilterQuery, docKeySort, 0, limit);
+      long segAndId = searcher.lookupId(new BytesRef(rootIdExt));
+      final int seg = (int) (segAndId >> 32);
+      final LeafReaderContext leafReaderContext = searcher.getIndexReader().leaves().get(seg);
+      final SortedDocValues 

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205468966
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java ---

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205477315
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java ---

[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205465998
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformerFactory.java ---
@@ -91,15 +100,37 @@ public DocTransformer create(String field, SolrParams params, SolrQueryRequest r
 
     Query childFilterQuery = null;
     if(childFilter != null) {
-      try {
-        childFilterQuery = QParser.getParser( childFilter, req).getQuery();
-      } catch (SyntaxError syntaxError) {
-        throw new SolrException( ErrorCode.BAD_REQUEST, "Failed to create correct child filter query" );
+      if(buildHierarchy) {
+        childFilter = buildHierarchyChildFilterString(childFilter);
+        return new DeeplyNestedChildDocTransformer(field, parentsFilter, req,
+            getChildQuery(childFilter, req), limit);
       }
+      childFilterQuery = getChildQuery(childFilter, req);
+    } else if(buildHierarchy) {
+      return new DeeplyNestedChildDocTransformer(field, parentsFilter, req, null, limit);
     }
 
     return new ChildDocTransformer( field, parentsFilter, uniqueKeyField, req.getSchema(), childFilterQuery, limit);
   }
+
+  private static Query getChildQuery(String childFilter, SolrQueryRequest req) {
+    try {
+      return QParser.getParser( childFilter, req).getQuery();
+    } catch (SyntaxError syntaxError) {
+      throw new SolrException( ErrorCode.BAD_REQUEST, "Failed to create correct child filter query" );
+    }
+  }
+
+  protected static String buildHierarchyChildFilterString(String queryString) {
--- End diff --

Remember to provide input/output example.  I think this is where the 
PathHierarchyTokenizer might come into play... and our discussions on the JIRA 
issue about that hierarchy.  Can we table this for now and do it in a follow-up 
issue?  (i.e. have no special syntax right now).  I'm just concerned the scope 
of this may be bigger than limited to this doc transformer since presumably 
users will want to do join queries using this syntax as well.  And this touches 
on how we index this, which is a bigger discussion than all the stuff 
going on already in this issue.  And this'll need to be documented in the Solr 
Ref Guide well.
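
(A purely hypothetical illustration of the kind of input/output example being 
requested here, since the syntax is still undecided: a childFilter such as 
"toppings/ingredients" might be rewritten into a filter on the internal 
_nest_path_ field, e.g. "_nest_path_:toppings/ingredients*".)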


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205465327
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformerFactory.java ---
@@ -70,36 +86,62 @@ public DocTransformer create(String field, SolrParams params, SolrQueryRequest r
 }
 
     String parentFilter = params.get( "parentFilter" );
-    if( parentFilter == null ) {
-      throw new SolrException( ErrorCode.BAD_REQUEST, "Parent filter should be sent as parentFilter=filterCondition" );
+    BitSetProducer parentsFilter = null;
+    boolean buildHierarchy = params.getBool("hierarchy", false);
+    if( parentFilter == null) {
+      if(!buildHierarchy) {
+        throw new SolrException( ErrorCode.BAD_REQUEST, "Parent filter should be sent as parentFilter=filterCondition" );
+      }
+      parentsFilter = new QueryBitSetProducer(rootFilter);
+    } else {
+      try {
+        Query parentFilterQuery = QParser.getParser(parentFilter, req).getQuery();
+        //TODO shouldn't we try to use the Solr filter cache, and then ideally implement
+        //  BitSetProducer over that?
+        // DocSet parentDocSet = req.getSearcher().getDocSet(parentFilterQuery);
+        // then return BitSetProducer with custom BitSet impl accessing the docSet
+        parentsFilter = new QueryBitSetProducer(parentFilterQuery);
+      } catch (SyntaxError syntaxError) {
+        throw new SolrException( ErrorCode.BAD_REQUEST, "Failed to create correct parent filter query" );
+      }
     }
 
     String childFilter = params.get( "childFilter" );
     int limit = params.getInt( "limit", 10 );
 
-    BitSetProducer parentsFilter = null;
-    try {
-      Query parentFilterQuery = QParser.getParser( parentFilter, req).getQuery();
-      //TODO shouldn't we try to use the Solr filter cache, and then ideally implement
-      //  BitSetProducer over that?
-      // DocSet parentDocSet = req.getSearcher().getDocSet(parentFilterQuery);
-      // then return BitSetProducer with custom BitSet impl accessing the docSet
-      parentsFilter = new QueryBitSetProducer(parentFilterQuery);
-    } catch (SyntaxError syntaxError) {
-      throw new SolrException( ErrorCode.BAD_REQUEST, "Failed to create correct parent filter query" );
-    }
-
     Query childFilterQuery = null;
     if(childFilter != null) {
--- End diff --

The code flow from here to the end of the method looks very awkward to me.  
I think the top "if" condition should test for buildHierarchy so that the 
nested and non-nested cases are clearly separated.  Do you think that would be 
clear?
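
A rough sketch of that restructuring (my reading of the suggestion, not the 
actual patch; all names are taken from the diff above):

    if (buildHierarchy) {
      Query childQuery = childFilter == null
          ? null : getChildQuery(buildHierarchyChildFilterString(childFilter), req);
      return new DeeplyNestedChildDocTransformer(field, parentsFilter, req, childQuery, limit);
    }
    // flat (non-nested) case
    Query childFilterQuery = childFilter == null ? null : getChildQuery(childFilter, req);
    return new ChildDocTransformer(field, parentsFilter, uniqueKeyField, req.getSchema(), childFilterQuery, limit);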


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205463666
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformerFactory.java ---
@@ -61,6 +71,12 @@
  */
 public class ChildDocTransformerFactory extends TransformerFactory {
 
+  public static final String PATH_SEP_CHAR = "/";
+  public static final String NUM_SEP_CHAR = "#";
+  private static final BooleanQuery rootFilter = new BooleanQuery.Builder()
+      .add(new BooleanClause(new MatchAllDocsQuery(), BooleanClause.Occur.MUST))
+      .add(new BooleanClause(new WildcardQuery(new Term(NEST_PATH_FIELD_NAME, new BytesRef("*"))), BooleanClause.Occur.MUST_NOT))
+      .build();
--- End diff --

Remember again to use DocValuesExistsQuery
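
For reference, a sketch of how that could look with Lucene's 
DocValuesFieldExistsQuery (assuming _nest_path_ is indexed with docValues), in 
place of the MatchAllDocsQuery/WildcardQuery combination above:

    // requires: import org.apache.lucene.search.DocValuesFieldExistsQuery;
    private static final Query rootFilter = new BooleanQuery.Builder()
        .add(new MatchAllDocsQuery(), BooleanClause.Occur.MUST)
        .add(new DocValuesFieldExistsQuery(NEST_PATH_FIELD_NAME), BooleanClause.Occur.MUST_NOT)
        .build();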


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205467807
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/DeeplyNestedChildDocTransformer.java ---
+  @Override
+  public String[] getExtraRequestFields() {
+    // we always need the idField (of the parent) in order to fill out its children
+    return new String[] { idField.getName() };
--- End diff --

Oh?  I didn't know we cared at all what the ID is.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: SynonymGraphFilter followed by StopFilter

2018-07-26 Thread Walter Underwood
Move the synonym filter to the index analyzer chain. That provides better 
performance and avoids some surprising relevance behavior. With synonyms at 
query time, you’ll see different idf for terms in the synonym set, with the 
rare variant scoring higher. That is probably the opposite of what is expected.

Also, phrase synonyms just don’t work at query time because the terms are 
parsed into individual tokens by the query parser, not the tokenizer.

Don’t use stop words. Just remove that line. Removing stop words is a 
performance and space hack that was useful in the 1960s, but causes problems 
now. I’ve never used stop word removal and I started in search with Infoseek in 
1996. Stop word removal is like a binary idf, ignoring common words. Since we 
have idf, we can give a lower score to common words and keep them in the index. 

Do those two things and it should work as you expect. 
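
(Concretely, for the field type quoted below, that means keeping the 
solr.SynonymGraphFilterFactory line only in the <analyzer type="index"> chain 
and deleting the solr.StopFilterFactory line everywhere.)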

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jul 26, 2018, at 3:23 AM, Andrea Gazzarini  wrote:
> 
> Hi Alan, thanks for the response and thank you very much for the pointers
> 
> On 26/07/18 12:16, Alan Woodward wrote:
>> Hi Andrea,
>> 
>> This is a long-standing issue: see 
>> https://issues.apache.org/jira/browse/LUCENE-4065 and 
>> https://issues.apache.org/jira/browse/LUCENE-8250 for discussion.  I don’t 
>> think we’ve reached a consensus on how to fix it yet, but more examples are 
>> good.
>> 
>> Unfortunately I don’t think changing the StopFilter to ignore SYNONYM tokens 
>> will work, because then you’ll generate queries that always fail - they’ll 
>> search for ‘of’ in the middle of the phrase, but ‘of’ never gets indexed 
>> because it’s removed by the StopFilter at index time.
>> 
>> - Alan
>> 
>>> On 26 Jul 2018, at 08:04, Andrea Gazzarini wrote:
>>> 
>>> Hi, 
>>> I have the following field type definition: 
>>> <fieldType ... autoGeneratePhraseQueries="true">
>>>   <analyzer>
>>>     <!-- ... -->
>>>     <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="false" expand="true"/>
>>>     <filter class="solr.StopFilterFactory" ... ignoreCase="false"/>
>>>   </analyzer>
>>> </fieldType>
>>> 
>>> 
>>> Where synonyms and stopwords are defined as follows: 
>>> 
>>> synonyms = out of warranty,oow
>>> stopwords = of
>>> 
>>> Running the following query:
>>> 
>>> q=my tv went out of warranty something of
>>> 
>>> I get wrong results, with the following explain: 
>>> 
>>> title:my title:tv title:went (title:oow PhraseQuery(title:"out ? warranty 
>>> something"))
>>> 
>>> That is, the synonyms is correctly detected, I see the graph information 
>>> are correctly reported in the positionLength, it seems they are wrongly 
>>> interpreted by the QueryParser. 
>>> I guess the reason is the "of" removal operated by the StopFilter: it 
>>> removes the "of" term within the phrase (I wouldn't want that) and 
>>> creates a "hole" in the span defined by the "oow" term, which has been 
>>> marked as a synonym with a positionLength = 3, therefore including the next 
>>> available term (something). 
>>> I tried to change the StopFilter in order to ignore stopwords that are 
>>> marked as SYNONYM or that are part of a previous synonym span, and it 
>>> works: it correctly produces the following query: 
>>> 
>>> title:my title:tv title:went (title:oow PhraseQuery(title:"out of 
>>> warranty")) title:something
>>> 
>>> So I'd like to ask your opinion about this. Am I missing something? Do you 
>>> think it's better to open a JIRA issue? If the solution is a graph aware 
>>> stop filter, do you think it's better to change the existing filter or to 
>>> subclass it?
>>> 
>>> Best, 
>>> Andrea
>>> 
>>> 
>> 
> 



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22530 - Still Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22530/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild

Error Message:
junit.framework.AssertionFailedError: Unexpected wrapped exception type, 
expected CoreIsClosedException

Stack Trace:
java.util.concurrent.ExecutionException: junit.framework.AssertionFailedError: 
Unexpected wrapped exception type, expected CoreIsClosedException
at 
__randomizedtesting.SeedInfo.seed([353464FDF18EAB01:EAB90642CFE7FE63]:0)
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at 
org.apache.solr.handler.component.InfixSuggestersTest.testShutdownDuringBuild(InfixSuggestersTest.java:130)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: junit.framework.AssertionFailedError: 

[jira] [Commented] (LUCENE-8204) ReqOptSumScorer should leverage sub scorers' per-block max scores

2018-07-26 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558370#comment-16558370
 ] 

Jim Ferenczi commented on LUCENE-8204:
--

{quote}
Could we somehow merge optIsRequiredBlock and optIsRequiredSegment to have 
fewer variables to take care of? For instance could we somehow set 
upTo=NO_MORE_DOCS so that optIsRequiredBlock=true's effect lasts til the end of 
the segment instead of optIsRequiredSegment?
{quote}

I've done that in my first attempt but the benchmark showed no improvement for 
the HighHigh case. The current patch can skip blocks even when the disjunction 
is required on the entire segment, so setting upTo to NO_MORE_DOCS would 
disable this optimization.

{quote}
advanceTarget does target = reqApproximation.advance(upTo + 1) and then 
moveToNextBlock(target). Should we just do target = upTo+1 to avoid reading 
postings? There might not be any matches in the next block and calling 
advance() forces the postings reader to decompress the block, while I would 
expect advanceTarget() to only advance the target based on impacts?
{quote}

I didn't know what to do here so I chose to use advance, but I agree that 
advanceTarget should only use impacts. I tested this change and it improves the 
benchmark by a nice margin (nice call ;) ):
{noformat}
            Task    QPS lucene_baseline  StdDev    QPS lucene_candidate  StdDev        Pct diff
         HighMed               48.81  (0.0%)                 52.29  (0.0%)      7.1% (   7% -    7%)
        HighHigh               14.47  (0.0%)                 23.82  (0.0%)     64.6% (  64% -   64%)
         HighLow              132.44  (0.0%)                312.50  (0.0%)    135.9% ( 135% -  135%)
{noformat}
I'll modify the patch with this change.

{quote}
advanceShallow should check that optScorer.docID() is less than or equal to 
target before calling advanceShallow on it?
{quote}

I didn't touch this part but I agree that it looks buggy. I'll add some tests 
to stress the case where this scorer is shallow advanced (inside an inner 
clause).
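
To make the advanceTarget change concrete, here is a rough sketch (my reading 
of the proposal, not the committed patch; upTo, maxScore and 
minCompetitiveScore are the scorer's existing block state):
{code:java}
// Move from block to block using only the per-block max scores (impacts);
// postings are never decoded here, so skipping whole blocks stays cheap.
private int advanceTarget(int target) throws IOException {
  while (true) {
    if (target > upTo) {
      moveToNextBlock(target); // refreshes upTo and maxScore from the impacts
    }
    if (maxScore >= minCompetitiveScore) {
      return target; // this block may contain a competitive document
    }
    target = upTo + 1; // skip the rest of the block without reading postings
  }
}
{code}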

> ReqOptSumScorer should leverage sub scorers' per-block max scores
> -
>
> Key: LUCENE-8204
> URL: https://issues.apache.org/jira/browse/LUCENE-8204
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8204.patch
>
>
> Currently it only looks at max scores on the entire segment. Given that 
> per-block max scores usually give lower upper bounds of the score, this 
> should help.
> This is especially important for LUCENE-8197 to work well since the main 
> query would typically be added as a MUST clauses of a boolean query while the 
> query that scores on features would be a SHOULD clause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8204) ReqOptSumScorer should leverage sub scorers' per-block max scores

2018-07-26 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558269#comment-16558269
 ] 

Adrien Grand commented on LUCENE-8204:
--

The benchmark numbers look great! Some comments on the patch:
 *  Could we somehow merge optIsRequiredBlock and optIsRequiredSegment to have 
fewer variables to take care of? For instance could we somehow set 
upTo=NO_MORE_DOCS so that optIsRequiredBlock=true's effect lasts til the end of 
the segment instead of optIsRequiredSegment?
 * advanceTarget does {{target = reqApproximation.advance(upTo + 1)}} and then 
{{moveToNextBlock(target)}}. Should we just do {{target = upTo+1}} to avoid 
reading postings? There might not be any matches in the next block and calling 
advance() forces the postings reader to decompress the block, while I would 
expect advanceTarget() to only advance the target based on impacts?
 * advanceShallow should check that optScorer.docID() is less than or equal to 
target before calling advanceShallow on it?

> ReqOptSumScorer should leverage sub scorers' per-block max scores
> -
>
> Key: LUCENE-8204
> URL: https://issues.apache.org/jira/browse/LUCENE-8204
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8204.patch
>
>
> Currently it only looks at max scores on the entire segment. Given that 
> per-block max scores usually give lower upper bounds of the score, this 
> should help.
> This is especially important for LUCENE-8197 to work well since the main 
> query would typically be added as a MUST clauses of a boolean query while the 
> query that scores on features would be a SHOULD clause.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12595) CloudSolrClient.Builder should accept a zkHost connection string

2018-07-26 Thread Vilius Pranckaitis (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558264#comment-16558264
 ] 

Vilius Pranckaitis commented on SOLR-12595:
---

There's a workaround, but you need to call at least one {{@Deprecated}} method: 
you can construct a {{ZkClientClusterStateProvider}} yourself and pass it in using the 
[{{withClusterStateProvider()}}|https://github.com/apache/lucene-solr/blob/d87ea6b1ccd28e0dd8e30565fe95b2e0a31f82e8/solr/solrj/src/java/org/apache/solr/client/solrj/impl/CloudSolrClient.java#L1556-L1559]
 method.

I'm volunteering to implement this new constructor.
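
A minimal sketch of the proposed constructor (assuming the Builder can keep its 
ClusterStateProvider in the same field that {{withClusterStateProvider()}} 
sets):
{code:java}
public Builder(String zkHost) {
  // ZkClientClusterStateProvider already parses
  // "zk1:2181,zk2:2181/some_chroot" style connection strings.
  this.stateProvider = new ZkClientClusterStateProvider(zkHost);
}
{code}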

> CloudSolrClient.Builder should accept a zkHost connection string
> 
>
> Key: SOLR-12595
> URL: https://issues.apache.org/jira/browse/SOLR-12595
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Vilius Pranckaitis
>Priority: Minor
>
> SOLR-11629 improved {{CloudSolrClient.Builder}} workflow by adding two new 
> constructors:
> {code:java}
> 1.   public Builder(List<String> solrUrls) {
> 2.   public Builder(List<String> zkHosts, Optional<String> zkChroot) {
> {code}
> It is not unusual to format ZooKeeper connection details as a single string 
> (e.g. {{zk1:2181,zk2:2181/some_chroot}}). However, these new constructors 
> make it difficult to use such connection strings.
> It would be fairly simple to add a third constructor which would accept a 
> connection string:
> {code:java}
> 3.   public Builder(String zkHost) {
> {code}
> {{CloudSolrClient.Builder}} uses ZooKeeper details to construct a 
> {{ZkClientClusterStateProvider}}, which [already supports ZK connection 
> strings|https://github.com/apache/lucene-solr/blob/d87ea6b1ccd28e0dd8e30565fe95b2e0a31f82e8/solr/solrj/src/java/org/apache/solr/client/solrj/impl/ZkClientClusterStateProvider.java#L57-L59].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Synonyms + autoGeneratePhraseQueries

2018-07-26 Thread Andrea Gazzarini

Hi, still fighting with synonyms, I have another question.

I'm not understanding the role, and the effect, of the 
"autoGeneratePhraseQueries" attribute in a synonym context.

I mean, if I have the following field type:

autoGeneratePhraseQueries="true">

   
   
   
   
   
   
   
   ignoreCase="false" expand="true"/>

   


with the following synonym: *out of warranty,oow*

with the following query: *q=out of warranty*

The output query is exactly what I would expect: *(title:oow 
PhraseQuery(title:"out of warranty"))*


Setting autoGeneratePhraseQueries to *false* (or, better, omitting 
the attribute declaration altogether), the output query is:


*(title:oow (+title:out +title:of +title:warranty))*

Which matches things like "I had to step out for renewing the warranty 
of my device".


This, at first glance, sounds completely wrong to me. Or, better, I'm not 
able to imagine a use case where that synonym decomposition could be 
useful. Is that wanted? I would say that the query parser should always 
generate a phrase query for multi-term synonyms, like in the first 
example (i.e. autoGeneratePhraseQueries=true).


Thanks in advance,
Andrea


[jira] [Created] (SOLR-12595) CloudSolrClient.Builder should accept a zkHost connection string

2018-07-26 Thread Vilius Pranckaitis (JIRA)
Vilius Pranckaitis created SOLR-12595:
-

 Summary: CloudSolrClient.Builder should accept a zkHost connection 
string
 Key: SOLR-12595
 URL: https://issues.apache.org/jira/browse/SOLR-12595
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Vilius Pranckaitis


SOLR-11629 improved {{CloudSolrClient.Builder}} workflow by adding two new 
constructors:
{code:java}
1.   public Builder(List<String> solrUrls) {
2.   public Builder(List<String> zkHosts, Optional<String> zkChroot) {
{code}
It is not unusual to format ZooKeeper connection details as a single string 
(e.g. {{zk1:2181,zk2:2181/some_chroot}}). However, these new constructors make 
it difficult to use such connection strings.

It would be fairly simple to add a third constructor which would accept a 
connection string:
{code:java}
3.   public Builder(String zkHost) {
{code}
{{CloudSolrClient.Builder}} uses ZooKeeper details to construct a 
{{ZkClientClusterStateProvider}}, which [already supports ZK connection 
strings|https://github.com/apache/lucene-solr/blob/d87ea6b1ccd28e0dd8e30565fe95b2e0a31f82e8/solr/solrj/src/java/org/apache/solr/client/solrj/impl/ZkClientClusterStateProvider.java#L57-L59].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: query other solr collection from within a solr plugin

2018-07-26 Thread Mikhail Khludnev
[subquery] calls remote cloud collections if the collection parameter (which is
somewhat obscure and not well documented) is supplied:
https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/response/transform/SubQueryAugmenterFactory.java#L334
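
For example, something like this (an untested sketch: "other_id" is a 
hypothetical field holding the foreign key, and the parameter names should be 
checked against the code above):

fl=id,other:[subquery]
other.q={!terms f=id v=$row.other_id}
other.collection=otherCollection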


On Thu, Jul 26, 2018 at 3:05 PM Nicolas Franck 
wrote:

> I'm writing a solr plugin in java that has to query another solr
> collection to gather
> information. What is the best way to do this?
>
> For now I'm just using a SolrClient ( CloudSolrClient ), but has several
> disadvantages:
>
> * you have to extract from core metadata where your server resides, and
> setup your SolrClient accordingly.
> * you are just knocking at the same door
> * search has to go over http for the same core.
>
> Is there a better way? Are there any examples?
>
> Thanks in advance
>
> Nicolas Franck
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

-- 
Sincerely yours
Mikhail Khludnev


Re: query other solr collection from within a solr plugin

2018-07-26 Thread Upayavira
Go look in the source for the Join query parser. It does this.

Upayavira

On Thu, 26 Jul 2018, at 1:04 PM, Nicolas Franck wrote:
> I'm writing a solr plugin in java that has to query another solr 
> collection to gather
> information. What is the best way to do this?
> 
> For now I'm just using a SolrClient ( CloudSolrClient ), but has several 
> disadvantages:
> 
> * you have to extract from core metadata where your server resides, and 
> setup your SolrClient accordingly.
> * you are just knocking at the same door
> * search has to go over http for the same core.
> 
> Is there a better way? Are there any examples?
> 
> Thanks in advance
> 
> Nicolas Franck
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



query other solr collection from within a solr plugin

2018-07-26 Thread Nicolas Franck
I'm writing a solr plugin in java that has to query another solr collection to 
gather
information. What is the best way to do this?

For now I'm just using a SolrClient (CloudSolrClient), but it has several 
disadvantages:

* you have to extract from core metadata where your server resides, and set up 
your SolrClient accordingly.
* you are just knocking at the same door
* search has to go over http for the same core.

Is there a better way? Are there any examples?

Thanks in advance

Nicolas Franck
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22529 - Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22529/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:33345/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:34293/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:33345/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:34293/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([28418C4A19F89BD7:828C5FB8AE2B4E07]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:290)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
  

[jira] [Commented] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558212#comment-16558212
 ] 

ASF subversion and git services commented on SOLR-12536:


Commit 28fc0e19503106e00415ff67ca04e055a9901cc2 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=28fc0e1 ]

SOLR-12536: autoscaling policy support to equally distribute replicas on the 
basis of arbitrary properties


> Enhance autoscaling policy to equally distribute replicas on the basis of 
> arbitrary properties
> --
>
> Key: SOLR-12536
> URL: https://issues.apache.org/jira/browse/SOLR-12536
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> *example:1*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : "#EACH" }
> //if the ports are "8983", "7574", "7575", the above rule is equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : ["8983", "7574", 
> "7575"]}{code}
> *case 1*: {{numShards=2, replicationFactor=3}}. In this case all the nodes 
> are divided into 3 buckets, each bucket containing the nodes on one port. Each 
> bucket must contain {{3 * 2 / 3 = 2}} replicas
>  
> *example : 2*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : "#EACH" }
> //if the zones are "east_1", "east_2", "west_1", the above rule is equivalent 
> to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : ["east_1", 
> "east_2", "west_1"]}{code}
> The behavior is similar to example 1, except that in this case we apply it to 
> a system property



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12536) Enhance autoscaling policy to equally distribute replicas on the basis of arbitrary properties

2018-07-26 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558210#comment-16558210
 ] 

ASF subversion and git services commented on SOLR-12536:


Commit d87ea6b1ccd28e0dd8e30565fe95b2e0a31f82e8 in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d87ea6b ]

SOLR-12536: autoscaling policy support to equally distribute replicas on the 
basis of arbitrary properties


> Enhance autoscaling policy to equally distribute replicas on the basis of 
> arbitrary properties
> --
>
> Key: SOLR-12536
> URL: https://issues.apache.org/jira/browse/SOLR-12536
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> *example:1*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : "#EACH" }
> //if the ports are "8983", "7574", "7575", the above rule is equivalent to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "port" : ["8983", "7574", 
> "7575"]}{code}
> *case 1*: {{numShards=2, replicationFactor=3}}. In this case all the nodes 
> are divided into 3 buckets, each bucket containing the nodes on one port. Each 
> bucket must contain {{3 * 2 / 3 = 2}} replicas
>  
> *example : 2*
> {code:java}
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : "#EACH" }
> //if the zones are "east_1", "east_2", "west_1", the above rule is equivalent 
> to
> {"replica" : "#EQUAL"  , "shard" : "#EACH" , "sysprop.zone" : ["east_1", 
> "east_2", "west_1"]}{code}
> The behavior is similar to example 1, except that in this case we apply it to 
> a system property



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 269 - Still Failing

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/269/

No tests ran.

Build Log:
[...truncated 23039 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2231 links (1786 relative) to 3004 anchors in 229 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

[...truncated...]

Re: SynonymGraphFilter followed by StopFilter

2018-07-26 Thread Andrea Gazzarini

Hi Alan, thanks for the response and thank you very much for the pointers


On 26/07/18 12:16, Alan Woodward wrote:

Hi Andrea,

This is a long-standing issue: see 
https://issues.apache.org/jira/browse/LUCENE-4065 and 
https://issues.apache.org/jira/browse/LUCENE-8250 for discussion.  I 
don’t think we’ve reached a consensus on how to fix it yet, but more 
examples are good.


Unfortunately I don’t think changing the StopFilter to ignore SYNONYM 
tokens will work, because then you’ll generate queries that always 
fail - they’ll search for ‘of’ in the middle of the phrase, but ‘of’ 
never gets indexed because it’s removed by the StopFilter at index time.


- Alan

On 26 Jul 2018, at 08:04, Andrea Gazzarini wrote:


Hi,
I have the following field type definition:
<fieldType name="..." class="solr.TextField" autoGeneratePhraseQueries="true">
  <analyzer>
    ...
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" 
            ignoreCase="false" expand="true"/>
    <filter class="solr.StopFilterFactory" words="..." ignoreCase="false"/>
  </analyzer>
</fieldType>
Where synonyms and stopwords are defined as follows:

synonyms = out of warranty,oow
stopwords = of

Running the following query:

q=my tv went out *of* warranty something *of*

I get wrong results, with the following explain:

title:my title:tv title:went (title:oow *PhraseQuery(title:"out ? 
warranty something"))*


That is, the synonym is correctly detected and I can see the graph 
information correctly reported in the positionLength, but it seems 
it is wrongly interpreted by the QueryParser.

I guess the reason is the "of" removal performed by the StopFilter, which

  * removes the "of" term within the phrase (I wouldn't want that)
  * creates a "hole" in the span defined by the "oow" term, which has
been marked as a synonym with a positionLength = 3, therefore
including the next available term (something).

I tried to change the StopFilter in order to ignore stopwords that 
are marked as SYNONYM or that are part of a previous synonym span, 
and it works: it correctly produces the following query:


title:my title:tv title:went *(title:oow PhraseQuery(title:"out of 
warranty"))* title:something


So I'd like to ask your opinion about this. Am I missing something? 
Do you think it's better to open a JIRA issue? If the solution is a 
graph aware stop filter, do you think it's better to change the 
existing filter or to subclass it?


Best,
Andrea








Re: SynonymGraphFilter followed by StopFilter

2018-07-26 Thread Alan Woodward
Hi Andrea,

This is a long-standing issue: see 
https://issues.apache.org/jira/browse/LUCENE-4065 
 and 
https://issues.apache.org/jira/browse/LUCENE-8250 
 for discussion.  I don’t 
think we’ve reached a consensus on how to fix it yet, but more examples are 
good.

Unfortunately I don’t think changing the StopFilter to ignore SYNONYM tokens 
will work, because then you’ll generate queries that always fail - they’ll 
search for ‘of’ in the middle of the phrase, but ‘of’ never gets indexed 
because it’s removed by the StopFilter at index time.

- Alan

> On 26 Jul 2018, at 08:04, Andrea Gazzarini wrote:
> 
> Hi, 
> I have the following field type definition: 
> <fieldType name="..." class="solr.TextField" autoGeneratePhraseQueries="true">
>   <analyzer>
>     ...
>     <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" 
>             ignoreCase="false" expand="true"/>
>     <filter class="solr.StopFilterFactory" words="..." ignoreCase="false"/>
>   </analyzer>
> </fieldType>
> Where synonyms and stopwords are defined as follows: 
> 
> synonyms = out of warranty,oow
> stopwords = of
> 
> Running the following query:
> 
> q=my tv went out of warranty something of
> 
> I get wrong results, with the following explain: 
> 
> title:my title:tv title:went (title:oow PhraseQuery(title:"out ? warranty 
> something"))
> 
> That is, the synonym is correctly detected and I can see the graph information 
> correctly reported in the positionLength, but it seems it is wrongly 
> interpreted by the QueryParser. 
> I guess the reason is the "of" removal performed by the StopFilter, which 
> * removes the "of" term within the phrase (I wouldn't want that)
> * creates a "hole" in the span defined by the "oow" term, which has been marked 
> as a synonym with a positionLength = 3, therefore including the next 
> available term (something). 
> I tried to change the StopFilter in order to ignore stopwords that are marked 
> as SYNONYM or that are part of a previous synonym span, and it works: it 
> correctly produces the following query: 
> 
> title:my title:tv title:went (title:oow PhraseQuery(title:"out of warranty")) 
> title:something
> 
> So I'd like to ask your opinion about this. Am I missing something? Do you 
> think it's better to open a JIRA issue? If the solution is a graph aware stop 
> filter, do you think it's better to change the existing filter or to subclass 
> it?
> 
> Best, 
> Andrea
> 
> 



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-07-26 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r205403130
  
--- Diff: 
solr/core/src/test/org/apache/solr/response/transform/TestDeeplyNestedChildDocTransformer.java
 ---
@@ -0,0 +1,163 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.response.transform;
+
+import java.util.concurrent.atomic.AtomicInteger;
+
+import org.apache.solr.SolrTestCaseJ4;
+import org.junit.After;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class TestDeeplyNestedChildDocTransformer extends SolrTestCaseJ4 {
+
+  private static AtomicInteger counter = new AtomicInteger();
+  private static final char PATH_SEP_CHAR = '/';
+  private static final String[] types = {"donut", "cake"};
+  private static final String[] ingredients = {"flour", "cocoa", "vanilla"};
+  private static final String[] names = {"Yaz", "Jazz", "Costa"};
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+    initCore("solrconfig-update-processor-chains.xml", "schema15.xml");
+  }
+
+  @After
+  public void after() throws Exception {
+    assertU(delQ("*:*"));
+    assertU(commit());
+  }
+
+  @Test
+  public void testParentFilterJSON() throws Exception {
+    indexSampleData(10);
+    String[] tests = new String[] {
--- End diff --

I have just updated this test, hopefully it is a lot better now.


---




[jira] [Updated] (LUCENE-8430) TopDocs.totalHits is not always the accurate hit count

2018-07-26 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8430:
-
Issue Type: Sub-task  (was: Improvement)
Parent: LUCENE-8060

> TopDocs.totalHits is not always the accurate hit count
> --
>
> Key: LUCENE-8430
> URL: https://issues.apache.org/jira/browse/LUCENE-8430
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8430.patch
>
>
> Sub-task of LUCENE-8060. We should change TopDocs.totalHits so that users get 
> a compilation error, and the new field or documentation should make it clear 
> that this number is not always the accurate hit count, which is important if 
> we want to enable index sorting / WAND / impacts-related optimizations by 
> default.






[jira] [Commented] (LUCENE-8430) TopDocs.totalHits is not always the accurate hit count

2018-07-26 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16558107#comment-16558107
 ] 

Adrien Grand commented on LUCENE-8430:
--

Here is a proposal which replaces TopDocs.totalHits with a new TotalHits object 
that is implemented like this:

{code}
import java.util.Objects;

/**
 * Description of the total number of hits of a query. The total hit count
 * can't generally be computed accurately without visiting all matches, which
 * is costly for queries that match lots of documents. Given that it is often
 * enough to have a lower bound of the number of hits, such as
 * "there are more than 1000 hits", Lucene has options to stop counting as soon
 * as a threshold has been reached in order to improve query times.
 */
public final class TotalHits {

  /** How the {@link TotalHits#value} should be interpreted. */
  public enum Relation {
/**
 * The total hit count is equal to {@link TotalHits#value}.
 */
EQUAL_TO,
/**
 * The total hit count is greater than or equal to {@link TotalHits#value}.
 */
GREATER_THAN_OR_EQUAL_TO
  }

  /**
   * The value of the total hit count. Must be interpreted in the context of
   * {@link #relation}.
   */
  public final long value;

  /**
   * Whether {@link #value} is the exact hit count, in which case
   * {@link #relation} is equal to {@link Relation#EQUAL_TO}, or a lower bound
   * of the total hit count, in which case {@link #relation} is equal to
   * {@link Relation#GREATER_THAN_OR_EQUAL_TO}.
   */
  public final Relation relation;

  /** Sole constructor. */
  public TotalHits(long value, Relation relation) {
if (value < 0) {
  throw new IllegalArgumentException("value must be >= 0, got " + value);
}
this.value = value;
this.relation = Objects.requireNonNull(relation);
  }

  @Override
  public String toString() {
return value + (relation == Relation.EQUAL_TO ? "" : "+") + " hits";
  }

}
{code}

Also TopScoreDocCollector and TopFieldCollector have been changed to disable 
the extrapolation of the hit count based on the number of hits that were 
collected exactly, and instead return the number of collected hits as a hit 
count, and GREATER_THAN_OR_EQUAL_TO as a relation. TopDocs#merge makes sure to 
return GREATER_THAN_OR_EQUAL_TO as a relation if any of the merged TopDocs 
instances has a hit count that is a lower bound too. All other changes are just 
about fixing compilation.

This way, whether the hit count is accurate or not is explicit, and users 
upgrading to Lucene 8 won't fall into the trap of assuming a hit count is 
accurate when it is not.
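For illustration, a small caller-side sketch of what this looks like to users, 
assuming the class lands as {{org.apache.lucene.search.TotalHits}} as in the 
patch (the wrapper class and helper name here are made up):

{code}
import java.io.IOException;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TotalHits;

public class HitCountReporting {
  // Renders the proposed TotalHits for display, making the lower-bound case explicit.
  static String describeHits(IndexSearcher searcher, Query query) throws IOException {
    TopDocs td = searcher.search(query, 10);
    TotalHits total = td.totalHits; // proposed: a TotalHits object instead of a long
    String prefix = total.relation == TotalHits.Relation.EQUAL_TO ? "" : "at least ";
    return prefix + total.value + " hits";
  }
}
{code}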

> TopDocs.totalHits is not always the accurate hit count
> --
>
> Key: LUCENE-8430
> URL: https://issues.apache.org/jira/browse/LUCENE-8430
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8430.patch
>
>
> Sub-task of LUCENE-8060. We should change TopDocs.totalHits so that users get 
> a compilation error, and the new field or documentation should make it clear 
> that this number is not always the accurate hit count, which is important if 
> we want to enable index sorting / WAND / impacts-related optimizations by 
> default.






[jira] [Updated] (LUCENE-8430) TopDocs.totalHits is not always the accurate hit count

2018-07-26 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8430:
-
Attachment: LUCENE-8430.patch

> TopDocs.totalHits is not always the accurate hit count
> --
>
> Key: LUCENE-8430
> URL: https://issues.apache.org/jira/browse/LUCENE-8430
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8430.patch
>
>
> Sub-task of LUCENE-8060. We should change TopDocs.totalHits so that users get 
> a compilation error, and the new field or documentation should make it clear 
> that this number is not always the accurate hit count, which is important if 
> we want to enable index sorting / WAND / impacts-related optimizations by 
> default.






[jira] [Created] (LUCENE-8430) TopDocs.totalHits is not always the accurate hit count

2018-07-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8430:


 Summary: TopDocs.totalHits is not always the accurate hit count
 Key: LUCENE-8430
 URL: https://issues.apache.org/jira/browse/LUCENE-8430
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand


Sub-task of LUCENE-8060. We should change TopDocs.totalHits so that users get a 
compilation error, and the new field or documentation should make it clear that 
this number is not always the accurate hit count, which is important if we want 
to enable index sorting / WAND / impacts-related optimizations by default.






[jira] [Created] (LUCENE-8429) DaciukMihovAutomatonBuilder needs protection against stack overflows

2018-07-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8429:


 Summary: DaciukMihovAutomatonBuilder needs protection against 
stack overflows
 Key: LUCENE-8429
 URL: https://issues.apache.org/jira/browse/LUCENE-8429
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand


The maximum level of recursion of this class is the maximum term length, which 
is not low enough to ensure it never fails with a stack overflow.
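To make the failure mode concrete, a toy illustration (hypothetical code, not 
the DaciukMihovAutomatonBuilder internals): per-character recursion costs one 
stack frame per character, so recursion depth tracks term length and a 
sufficiently long term overflows a default-sized thread stack:

{code:java}
import java.util.Arrays;

public class RecursionDepthDemo {
  // One stack frame per character, mirroring per-character recursive construction.
  static int countChars(String term, int i) {
    if (i == term.length()) return 0;
    return 1 + countChars(term, i + 1);
  }

  public static void main(String[] args) {
    char[] chars = new char[1_000_000];
    Arrays.fill(chars, 'a');
    String longTerm = new String(chars);
    System.out.println(countChars(longTerm, 0)); // throws StackOverflowError
  }
}
{code}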






[JENKINS] Lucene-Solr-Tests-7.x - Build # 700 - Still Unstable

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/700/

4 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:43890/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:45988/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:43890/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:45988/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([1C03E7D0BA1D49FA:B6CE34220DCE9C2A]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:288)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2622 - Unstable

2018-07-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2622/

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:45647/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:47525/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:45647/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:47525/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([29469F98CBD33C0C:838B4C6A7C00E9DC]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

SynonymGraphFilter followed by StopFilter

2018-07-26 Thread Andrea Gazzarini

Hi,
I have the following field type definition:

<fieldType name="..." class="solr.TextField" autoGeneratePhraseQueries="true">
  <analyzer>
    ...
    <filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" 
            ignoreCase="false" expand="true"/>
    <filter class="solr.StopFilterFactory" words="..." ignoreCase="false"/>
  </analyzer>
</fieldType>

Where synonyms and stopwords are defined as follows:

synonyms = out of warranty,oow
stopwords = of

Running the following query:

q=my tv went out *of* warranty something *of*

I get wrong results, with the following explain:

title:my title:tv title:went (title:oow *PhraseQuery(title:"out ? 
warranty something"))*


That is, the synonym is correctly detected and I can see the graph information 
correctly reported in the positionLength, but it seems it is wrongly 
interpreted by the QueryParser.

I guess the reason is the "of" removal performed by the StopFilter, which

 * removes the "of" term within the phrase (I wouldn't want that)
 * creates a "hole" in the span defined by the "oow" term, which has
   been marked as a synonym with a positionLength = 3, therefore
   including the next available term (something).

I tried to change the StopFilter in order to ignore stopwords that are 
marked as SYNONYM or that are part of a previous synonym span, and it 
works: it correctly produces the following query:


title:my title:tv title:went *(title:oow PhraseQuery(title:"out of 
warranty"))* title:something


So I'd like to ask your opinion about this. Am I missing something? Do 
you think it's better to open a JIRA issue? If the solution is a graph 
aware stop filter, do you think it's better to change the existing 
filter or to subclass it?


Best,
Andrea
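To make the report easy to reproduce outside Solr, here is a self-contained 
sketch of the chain described above, assuming the Lucene 7.x analysis APIs (the 
field name, tokenizer choice and class name are illustrative, not taken from 
the original schema). It builds the "out of warranty => oow" synonym 
programmatically, applies a StopFilter for "of", and prints each token's 
positionLength so the "hole" is visible:

import java.io.IOException;
import java.util.Arrays;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.CharArraySet;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.synonym.SynonymGraphFilter;
import org.apache.lucene.analysis.synonym.SynonymMap;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;
import org.apache.lucene.util.CharsRef;
import org.apache.lucene.util.CharsRefBuilder;

public class SynonymStopDemo {
  public static void main(String[] args) throws IOException {
    // "out of warranty" => "oow", keeping the original tokens (expand=true).
    SynonymMap.Builder builder = new SynonymMap.Builder(true);
    builder.add(
        SynonymMap.Builder.join(new String[]{"out", "of", "warranty"}, new CharsRefBuilder()),
        new CharsRef("oow"), true);
    SynonymMap synonyms = builder.build();
    CharArraySet stopwords = new CharArraySet(Arrays.asList("of"), false);

    Analyzer analyzer = new Analyzer() {
      @Override
      protected TokenStreamComponents createComponents(String fieldName) {
        Tokenizer source = new WhitespaceTokenizer();
        TokenStream sink = new SynonymGraphFilter(source, synonyms, false);
        sink = new StopFilter(sink, stopwords); // removes "of", leaving a hole in the graph
        return new TokenStreamComponents(source, sink);
      }
    };

    try (TokenStream ts = analyzer.tokenStream("title", "my tv went out of warranty something of")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      PositionIncrementAttribute posInc = ts.addAttribute(PositionIncrementAttribute.class);
      PositionLengthAttribute posLen = ts.addAttribute(PositionLengthAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        // "oow" keeps positionLength=3 even though "of" was removed after it.
        System.out.println(term + " posInc=" + posInc.getPositionIncrement()
            + " posLen=" + posLen.getPositionLength());
      }
      ts.end();
    }
  }
}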




[JENKINS-EA] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-11-ea+23) - Build # 70 - Still Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/70/
Java: 64bit/jdk-11-ea+23 -XX:+UseCompressedOops -XX:+UseParallelGC

34 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=5095, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=5095, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler]
at java.base@11-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([D297954ADAD24FC7]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=5430, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=5430, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler]
at java.base@11-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([D297954ADAD24FC7]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=8714, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=8714, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler]
at java.base@11-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([D297954ADAD24FC7]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=9037, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=9037, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler]
at java.base@11-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([D297954ADAD24FC7]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestSQLHandler

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestSQLHandler: 
1) Thread[id=4772, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestSQLHandler] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestSQLHandler: 
   1) Thread[id=4772, 

[jira] [Commented] (LUCENE-8428) Allow configurable sentinels in PriorityQueue

2018-07-26 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556971#comment-16556971
 ] 

Dawid Weiss commented on LUCENE-8428:
-

+1. Nicer than subclassing.

> Allow configurable sentinels in PriorityQueue
> -
>
> Key: LUCENE-8428
> URL: https://issues.apache.org/jira/browse/LUCENE-8428
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8428.patch
>
>
> This is a follow-up to SOLR-12587: Lucene's PriorityQueue API makes it 
> impossible to have a configurable sentinel object since the parent 
> constructor is called before a sub class has the opportunity to set anything 
> that helps create those sentinels. 






[jira] [Commented] (SOLR-12587) Reuse Lucene's PriorityQueue for the ExportHandler

2018-07-26 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556970#comment-16556970
 ] 

Varun Thacker commented on SOLR-12587:
--

Thanks Adrien! I worked on a prototype patch and I'll post it on the Lucene 
JIRA tomorrow 

> Reuse Lucene's PriorityQueue for the ExportHandler
> --
>
> Key: SOLR-12587
> URL: https://issues.apache.org/jira/browse/SOLR-12587
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>  Labels: export-writer
> Attachments: SOLR-12587.patch
>
>
> We have a priority queue in Lucene, {{org.apache.lucene.util.PriorityQueue}}. 
> The Export Handler also implements a PriorityQueue 
> {{org.apache.solr.handler.export.PriorityQueue}} . Both are obviously very 
> similar with minor API differences. 
>  
> The aim here is to reuse Lucene's PQ and remove the Solr implementation. 
>  






[jira] [Commented] (SOLR-12587) Reuse Lucene's PriorityQueue for the ExportHandler

2018-07-26 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556968#comment-16556968
 ] 

Adrien Grand commented on SOLR-12587:
-

I opened LUCENE-8428 for discussion.

> Reuse Lucene's PriorityQueue for the ExportHandler
> --
>
> Key: SOLR-12587
> URL: https://issues.apache.org/jira/browse/SOLR-12587
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>  Labels: export-writer
> Attachments: SOLR-12587.patch
>
>
> We have a priority queue in Lucene, {{org.apache.lucene.util.PriorityQueue}}. 
> The Export Handler also implements a PriorityQueue 
> {{org.apache.solr.handler.export.PriorityQueue}} . Both are obviously very 
> similar with minor API differences. 
>  
> The aim here is to reuse Lucene's PQ and remove the Solr implementation. 
>  






[jira] [Created] (LUCENE-8428) Allow configurable sentinels in PriorityQueue

2018-07-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8428:


 Summary: Allow configurable sentinels in PriorityQueue
 Key: LUCENE-8428
 URL: https://issues.apache.org/jira/browse/LUCENE-8428
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand


This is a follow-up to SOLR-12587: Lucene's PriorityQueue API makes it 
impossible to have a configurable sentinel object since the parent constructor 
is called before a sub class has the opportunity to set anything that helps 
create those sentinels. 
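For context, a minimal, self-contained sketch of that ordering problem (all 
names hypothetical, not the actual Lucene classes): the superclass constructor 
pre-fills the heap through a virtual call before the subclass field that 
defines the sentinel has been assigned:

{code:java}
abstract class SentinelQueue<T> {
  private final Object[] heap;

  SentinelQueue(int size) {
    heap = new Object[size];
    for (int i = 0; i < size; i++) {
      heap[i] = getSentinelObject(); // virtual call runs before any subclass constructor body
    }
  }

  protected abstract T getSentinelObject();

  @SuppressWarnings("unchecked")
  T top() { return (T) heap[0]; }
}

class ScoreQueue extends SentinelQueue<Float> {
  private final float sentinelScore; // still 0.0f while super(...) executes

  ScoreQueue(int size, float sentinelScore) {
    super(size);                        // sentinels are created here...
    this.sentinelScore = sentinelScore; // ...so this assignment comes too late
  }

  @Override
  protected Float getSentinelObject() { return sentinelScore; }
}

public class SentinelPitfall {
  public static void main(String[] args) {
    // Expect a heap full of -1.0 sentinels, but it was filled with 0.0 instead.
    System.out.println(new ScoreQueue(4, -1.0f).top()); // prints 0.0
  }
}
{code}

Passing the sentinel (or a supplier of sentinels) into the constructor, rather 
than overriding a method, would avoid the problem entirely.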






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10) - Build # 715 - Still Unstable!

2018-07-26 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/715/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
found:2[index.20180726010936758, index.20180726010949111, index.properties, 
replication.properties, snapshot_metadata]

Stack Trace:
java.lang.AssertionError: found:2[index.20180726010936758, 
index.20180726010949111, index.properties, replication.properties, 
snapshot_metadata]
at 
__randomizedtesting.SeedInfo.seed([EFE9113E170B6DE2:344211F812230451]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:969)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:940)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:916)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-8369) Remove the spatial module as it is obsolete

2018-07-26 Thread Allen Wittenauer (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16556940#comment-16556940
 ] 

Allen Wittenauer commented on LUCENE-8369:
--

Hi everyone.  I was brought here by YETUS-645.  Since there were questions 
about what went wrong, I re-triggered the test run to see why it was blowing up 
since the log files had rolled off.

First, the good news: the patch applied fine. Yetus definitely supports 
0-level patches such as those generated by IntelliJ; special handling is there 
to try to determine what the appropriate patch level should be. If Yetus 
couldn't apply the patch, it would have reported that fact explicitly. That's 
clearly not the case here.

Now, the bad news: As [~steve_rowe] speculated, Yetus definitely stumbles a bit 
when modules are moved or deleted in a patch.  It has been a known issue for a 
while (YETUS-14 !).  I think it's been a low priority to fix since it doesn't 
happen that often in a lot of code bases.  If you folks need to make sure that 
works, let me know and I'll try to prioritize fixing it.

But the badder news:  the patch appears to have introduced 4 new javac warnings 
in lucene_spatial-extras. Don't let them get lost in that sea of red. ;)

 

> Remove the spatial module as it is obsolete
> ---
>
> Key: LUCENE-8369
> URL: https://issues.apache.org/jira/browse/LUCENE-8369
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8369.patch
>
>
> The "spatial" module is at this juncture nearly empty with only a couple 
> utilities that aren't used by anything in the entire codebase -- 
> GeoRelationUtils, and MortonEncoder.  Perhaps it should have been removed 
> earlier in LUCENE-7664 which was the removal of GeoPointField which was 
> essentially why the module existed.  Better late than never.


