[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_121) - Build # 853 - Still unstable!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/853/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery

Error Message:
Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"MissingSegmentRecoveryTest_shard1_replica1",
          "base_url":"http://127.0.0.1:61832/solr",
          "node_name":"127.0.0.1:61832_solr",
          "state":"active",
          "leader":"true"},
        "core_node2":{
          "core":"MissingSegmentRecoveryTest_shard1_replica2",
          "base_url":"http://127.0.0.1:61837/solr",
          "node_name":"127.0.0.1:61837_solr",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected a collection with one shard and two replicas
null
Last available state: 
DocCollection(MissingSegmentRecoveryTest//collections/MissingSegmentRecoveryTest/state.json/8)={
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node1":{
          "core":"MissingSegmentRecoveryTest_shard1_replica1",
          "base_url":"http://127.0.0.1:61832/solr",
          "node_name":"127.0.0.1:61832_solr",
          "state":"active",
          "leader":"true"},
        "core_node2":{
          "core":"MissingSegmentRecoveryTest_shard1_replica2",
          "base_url":"http://127.0.0.1:61837/solr",
          "node_name":"127.0.0.1:61837_solr",
          "state":"down"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
	at __randomizedtesting.SeedInfo.seed([80ED98C739D85121:D0B800C460F9E73C]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:265)
	at org.apache.solr.cloud.MissingSegmentRecoveryTest.testLeaderRecovery(MissingSegmentRecoveryTest.java:105)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	[...truncated...]

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_121) - Build # 19453 - Still Unstable!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19453/
Java: 32bit/jdk1.8.0_121 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.admin.StatsReloadRaceTest.testParallelReloadAndStats

Error Message:
Key SEARCHER.searcher.indexVersion not found in registry solr.core.collection1

Stack Trace:
java.lang.AssertionError: Key SEARCHER.searcher.indexVersion not found in registry solr.core.collection1
	at __randomizedtesting.SeedInfo.seed([D5D98C58CE0B4DC6:1A47E96141FA2599]:0)
	at org.junit.Assert.fail(Assert.java:93)
	at org.junit.Assert.assertTrue(Assert.java:43)
	at org.apache.solr.handler.admin.StatsReloadRaceTest.requestMetrics(StatsReloadRaceTest.java:132)
	at org.apache.solr.handler.admin.StatsReloadRaceTest.testParallelReloadAndStats(StatsReloadRaceTest.java:70)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 12867 lines...]

[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Attachment: SOLR-10551.patch

> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch, SOLR-10551.patch
>
>
> The *list* expression contains a list of streams. It opens, reads and closes 
> each stream sequentially. Sample syntax:
> {code}
> list(expr,expr, ...)
> {code}
> The *cell* expression gathers the tuples from a stream and flattens them into 
> a list in a *single* celled tuple. Sample syntax:
> {code}
> cell(name, expr)
> {code}
> In the tuple emitted by the cell expression the *name* param is the key which 
> points to the flattened list.
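The two expressions described above can be sketched as a toy model in plain Python (this is an illustration of the documented semantics, not Solr's streaming implementation; the stream contents and tuple shapes below are illustrative assumptions):

```python
# Toy model (plain Python, not Solr code) of the list and cell semantics.
# Tuples are modeled as dicts; streams as iterables of dicts.

def list_expr(*streams):
    # list(expr, expr, ...): read each stream sequentially,
    # emitting every tuple in order.
    for stream in streams:
        yield from stream

def cell_expr(name, stream):
    # cell(name, expr): gather all tuples from the stream and flatten
    # them into a list held under `name` in a single-celled tuple.
    return {name: [tup for tup in stream]}

a = [{"id": "1"}, {"id": "2"}]
b = [{"id": "3"}]

print(list(list_expr(a, b)))    # [{'id': '1'}, {'id': '2'}, {'id': '3'}]
print(cell_expr("results", a))  # {'results': [{'id': '1'}, {'id': '2'}]}
```

As the model shows, list preserves tuple order across its sub-streams, while cell trades a stream of many tuples for one tuple whose *name* key holds the whole list.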



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Description: 
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

The *cell* expression gathers the tuples from a stream and flattens them into a 
list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

In the tuple emitted by the cell expression the *name* param is the key which 
points to the flattened list.



  was:
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

In the tuple emitted by the cell expression the *name* param is the key which 
points to the flattened list.




> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *list* expression contains a list of streams. It opens, reads and closes 
> each stream sequentially. Sample syntax:
> {code}
> list(expr,expr, ...)
> {code}
> The *cell* expression gathers the tuples from a stream and flattens them into 
> a list in a *single* celled tuple. Sample syntax:
> {code}
> cell(name, expr)
> {code}
> In the tuple emitted by the cell expression the *name* param is the key which 
> points to the flattened list.






[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Description: 
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

In the tuple emitted by the cell expression the *name* param is the key which 
points to the flattened list.



  was:
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

In the tuple, that the cell expression emits, the *name* param is the key which 
points to the flattened list.




> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *list* expression contains a list of streams. It opens, reads and closes 
> each stream sequentially. Sample syntax:
> {code}
> list(expr,expr, ...)
> {code}
> This *cell* expression gathers the tuples from a stream and flattens them 
> into a list in a *single* celled tuple. Sample syntax:
> {code}
> cell(name, expr)
> {code}
> In the tuple emitted by the cell expression the *name* param is the key which 
> points to the flattened list.






[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Description: 
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

In the tuple, that the cell expression emits, the *name* param is the key which 
points to flattened list.



  was:
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

The *name* param is the key in the tuple pointing to the list.




> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *list* expression contains a list of streams. It opens, reads and closes 
> each stream sequentially. Sample syntax:
> {code}
> list(expr,expr, ...)
> {code}
> This *cell* expression gathers the tuples from a stream and flattens them 
> into a list in a *single* celled tuple. Sample syntax:
> {code}
> cell(name, expr)
> {code}
> In the tuple, that the cell expression emits, the *name* param is the key 
> which points to flattened list.






[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Description: 
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

In the tuple, that the cell expression emits, the *name* param is the key which 
points to the flattened list.



  was:
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

In the tuple, that the cell expression emits, the *name* param is the key which 
points to flattened list.




> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *list* expression contains a list of streams. It opens, reads and closes 
> each stream sequentially. Sample syntax:
> {code}
> list(expr,expr, ...)
> {code}
> This *cell* expression gathers the tuples from a stream and flattens them 
> into a list in a *single* celled tuple. Sample syntax:
> {code}
> cell(name, expr)
> {code}
> In the tuple, that the cell expression emits, the *name* param is the key 
> which points to the flattened list.






[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Description: 
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
cell(name, expr)
{code}

The *name* param is the key in the tuple pointing to the list.



  was:
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
list(name, expr)
{code}

The *name* param is the key in the tuple pointing to the list.




> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *list* expression contains a list of streams. It opens, reads and closes 
> each stream sequentially. Sample syntax:
> {code}
> list(expr,expr, ...)
> {code}
> This *cell* expression gathers the tuples from a stream and flattens them 
> into a list in a *single* celled tuple. Sample syntax:
> {code}
> cell(name, expr)
> {code}
> The *name* param is the key in the tuple pointing to the list.






[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Description: 
The *list* expression contains a list of streams. It opens, reads and closes 
each stream sequentially. Sample syntax:

{code}
list(expr,expr, ...)
{code}

This *cell* expression gathers the tuples from a stream and flattens them into 
a list in a *single* celled tuple. Sample syntax:

{code}
list(name, expr)
{code}

The *name* param is the key in the tuple pointing to the list.



  was:
The *cat* expression concatenates streams. Sample syntax:

{code}
cat(expr,expr, ...)
{code}

This *list* expression gathers the tuples from a stream and flattens them into 
a list in a single tuple. Sample syntax:

{code}
list(name, expr)
{code}

The *name* param is the key in the tuple pointing to the list.




> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *list* expression contains a list of streams. It opens, reads and closes 
> each stream sequentially. Sample syntax:
> {code}
> list(expr,expr, ...)
> {code}
> This *cell* expression gathers the tuples from a stream and flattens them 
> into a list in a *single* celled tuple. Sample syntax:
> {code}
> list(name, expr)
> {code}
> The *name* param is the key in the tuple pointing to the list.






[jira] [Updated] (SOLR-10551) Add list and cell Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Summary: Add list and cell Streaming Expressions  (was: Add cat and list 
Streaming Expressions)

> Add list and cell Streaming Expressions
> ---
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *cat* expression concatenates streams. Sample syntax:
> {code}
> cat(expr,expr, ...)
> {code}
> This *list* expression gathers the tuples from a stream and flattens them 
> into a list in a single tuple. Sample syntax:
> {code}
> list(name, expr)
> {code}
> The *name* param is the key in the tuple pointing to the list.






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_121) - Build # 19452 - Unstable!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19452/
Java: 32bit/jdk1.8.0_121 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
ObjectTracker found 5 object(s) that were not released!!!
[MockDirectoryWrapper, MDCAwareThreadPoolExecutor, MockDirectoryWrapper, MockDirectoryWrapper, TransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.lucene.store.MockDirectoryWrapper
	at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
	at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:347)
	at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:360)
	at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:720)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:947)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:854)
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:958)
	at org.apache.solr.core.CoreContainer.create(CoreContainer.java:893)
	at org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:88)
	at org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:370)
	at org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:388)
	at org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
	at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:178)
	at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:747)
	at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:728)
	at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:509)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:355)
	at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:306)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
	at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
	at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1699)
	at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
	at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
	at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
	at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
	at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
	at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
	at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
	at org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:395)
	at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
	at org.eclipse.jetty.server.Server.handle(Server.java:534)
	at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
	at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:202)
	at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
	at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
	at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
	at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
	at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
	at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
	at java.lang.Thread.run(Thread.java:745)
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor
	at org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
	at org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)
	at org.apache.solr.core.SolrCore.reload(SolrCore.java:647)
	at org.apache.solr.core.CoreContainer.reload(CoreContainer.java:1154)
	at org.apache.solr.core.SolrCore.lambda$getConfListener$18(SolrCore.java:2949)
	at org.apache.solr.handler.SolrConfigHandler$Command.lambda$handleGET$0(SolrConfigHandler.java:225)
	at java.lang.Thread.run(Thread.java:745)

[jira] [Updated] (SOLR-10310) By default, stop splitting on whitespace prior to analysis in edismax and "Lucene"/standard query parsers

2017-04-21 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10310:
--
Attachment: SOLR-10310.patch

Patch switching default {{sow}} to {{false}}.

All Solr tests pass, and precommit passes.

I think it's ready to go, but I'll wait a few days before committing in case 
there are objections.

Two behavior changes result from this switch, as illustrated by tests:

1. When {{sow=false}}, {{autoGeneratePhraseQueries="true"}}, and words are 
split (e.g. by WordDelimiterGraphFilter) but no overlapping terms are produced, 
phrase queries are *not* produced - see LUCENE-7799 for a possible eventual 
solution to this problem:

{code:java|title=TestSolrQueryParser.testPhrase()}
// "text" field's type has WordDelimiterGraphFilter (WDGFF) and 
autoGeneratePhraseQueries=true
// should generate a phrase of "now cow" and match only one doc
assertQ(req("q", "text:now-cow", "indent", "true", "sow","true")
, "//*[@numFound='1']"
);
// When sow=false, autoGeneratePhraseQueries=true only works when a graph is 
produced
// (i.e. overlapping terms, e.g. if WDGFF's preserveOriginal=1 or 
concatenateWords=1).
// The WDGFF config on the "text" field doesn't produce a graph, so the 
generated query
// is not a phrase query.  As a result, docs can match that don't match phrase 
query "now cow"
assertQ(req("q", "text:now-cow", "indent", "true", "sow","false")
, "//*[@numFound='2']"
);
assertQ(req("q", "text:now-cow", "indent", "true") // default sow=false
, "//*[@numFound='2']"
);
{code}
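The condition in the comments above (a phrase is only auto-generated when analysis produces a graph, i.e. overlapping terms) can be reduced to a toy check in plain Python; the (term, position) pairs below are illustrative assumptions, not Lucene's actual token stream API:

```python
# Toy model (not Lucene code) of "phrase only when a graph is produced".
# Tokens are (term, position) pairs; a graph exists when some position
# holds more than one term (overlap, e.g. from WDGFF preserveOriginal=1).

def produces_phrase(analyzed_terms):
    positions = [pos for _, pos in analyzed_terms]
    return len(positions) != len(set(positions))  # any overlapping positions?

# "text" field's WDGFF config (no preserveOriginal/concatenateWords):
# "now-cow" -> now(0), cow(1) -- no overlap, so no phrase is generated.
print(produces_phrase([("now", 0), ("cow", 1)]))                  # False

# Hypothetical config with preserveOriginal=1 keeps "now-cow" at position 0,
# overlapping "now" -- a graph, so a phrase query can be generated.
print(produces_phrase([("now-cow", 0), ("now", 0), ("cow", 1)]))  # True
```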

2. {{sow=false}} changes the queries edismax produces over multiple fields when 
any of the fields’ query-time analysis differs from the other fields’, e.g. if 
one field’s analyzer removes stopwords when another field’s doesn’t. In this 
case, rather than a dismax-query-per-whitespace-separated-term (edismax’s 
behavior when {{sow=true}}), a dismax-query-per-field is produced. This can 
change results in general, but quite significantly when combined with the 
{{mm}} (min-should-match) request parameter: since min-should-match applies per 
field instead of per term, missing terms in one field’s analysis won’t 
disqualify docs from matching.

{code:java|title=TestExtendedDismaxParser.testFocusQueryParser()}
assertQ(req("defType","edismax", "mm","100%", "q","Terminator: 100", "qf","movies_t foo_i", "sow","true"),
    nor);
// When sow=false, the per-field query structures differ (no "Terminator" query on integer field foo_i),
// so a dismax-per-field is constructed.  As a result, mm=100% is applied per-field instead of per-term;
// since there is only one term (100) required in the foo_i field's dismax, the query can match docs that
// only have the 100 term in the foo_i field, and don't necessarily have "Terminator" in any field.
assertQ(req("defType","edismax", "mm","100%", "q","Terminator: 100", "qf","movies_t foo_i", "sow","false"),
    oner);
assertQ(req("defType","edismax", "mm","100%", "q","Terminator: 100", "qf","movies_t foo_i"), // default sow=false
    oner);
{code}
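To make the per-field vs. per-term application of {{mm}} concrete, here is a toy standalone simulation (not Solr code; the field names {{movies_t}}/{{foo_i}} come from the test above, but the "analyzers" and matching logic are simplified stand-ins, not edismax's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class MmSimulation {
    // Simplified stand-in analyzers: the text field keeps every term,
    // the integer field keeps only terms that parse as numbers.
    static List<String> analyzeText(List<String> terms) { return terms; }

    static List<String> analyzeInt(List<String> terms) {
        List<String> out = new ArrayList<>();
        for (String t : terms) if (t.matches("\\d+")) out.add(t);
        return out;
    }

    // sow=true style: one dismax per whitespace-separated term; mm=100%
    // means every query term must match in at least one field.
    static boolean matchPerTerm(Map<String, Set<String>> doc, List<String> qTerms) {
        for (String t : qTerms) {
            boolean matchedSomewhere =
                doc.getOrDefault("movies_t", Set.of()).contains(t)
                || doc.getOrDefault("foo_i", Set.of()).contains(t);
            if (!matchedSomewhere) return false;
        }
        return true;
    }

    // sow=false style: one dismax per field; mm=100% applies within each
    // field's own analyzed term list, and the field queries are OR'ed.
    static boolean matchPerField(Map<String, Set<String>> doc, List<String> qTerms) {
        List<String> textTerms = analyzeText(qTerms);
        List<String> intTerms = analyzeInt(qTerms);
        boolean textMatch = !textTerms.isEmpty()
            && doc.getOrDefault("movies_t", Set.of()).containsAll(textTerms);
        boolean intMatch = !intTerms.isEmpty()
            && doc.getOrDefault("foo_i", Set.of()).containsAll(intTerms);
        return textMatch || intMatch;
    }

    public static void main(String[] args) {
        // A doc that has 100 in foo_i but no "Terminator" in any field
        Map<String, Set<String>> doc = Map.of("foo_i", Set.of("100"));
        List<String> q = List.of("Terminator", "100");
        System.out.println(matchPerTerm(doc, q));   // false: "Terminator" matches nowhere
        System.out.println(matchPerField(doc, q));  // true: foo_i's dismax only needs "100"
    }
}
```

The doc containing only the 100 term fails the per-term check but passes the per-field one, which is exactly the behavior difference the test above asserts.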

> By default, stop splitting on whitespace prior to analysis in edismax and 
> "Lucene"/standard query parsers
> -
>
> Key: SOLR-10310
> URL: https://issues.apache.org/jira/browse/SOLR-10310
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-10310.patch
>
>
> SOLR-9185 introduced an option on the edismax and standard query parsers to 
> not perform pre-analysis whitespace splitting: the {{sow=false}} request 
> param.
> On master/7.0, we should make {{sow=false}} the default.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7799) Classic query parser should allow autoGeneratePhraseQueries=true when splitOnWhitespace=false

2017-04-21 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7799:
---
Component/s: core/queryparser

> Classic query parser should allow autoGeneratePhraseQueries=true when 
> splitOnWhitespace=false
> -
>
> Key: LUCENE-7799
> URL: https://issues.apache.org/jira/browse/LUCENE-7799
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/queryparser
>Reporter: Steve Rowe
>
> LUCENE-7533 disabled the option combination 
> {{splitOnWhitespace=false}}/{{autoGeneratePhraseQueries=true}} because of how 
> {{autoGeneratePhraseQueries=true}} is handled: a query chunk is treated as if 
> it were literally quoted.  When {{splitOnWhitespace=false}}, a query chunk 
> can be multiple whitespace-separated words, and auto-quoting multiple terms 
> will produce inappropriate phrase queries.
> I have an idea about how to fix this: {{autoGeneratePhraseQueries=true}} is 
> supposed to cause phrase queries to be constructed when multiple analyzed 
> terms result from a single query word, e.g. when WordDelimiter(Graph)Filter 
> splits words up.  Maybe this could be re-implemented in terms of offsets, 
> since all terms from the same original term share the same offsets.






[jira] [Created] (LUCENE-7799) Classic query parser should allow autoGeneratePhraseQueries=true when splitOnWhitespace=false

2017-04-21 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-7799:
--

 Summary: Classic query parser should allow 
autoGeneratePhraseQueries=true when splitOnWhitespace=false
 Key: LUCENE-7799
 URL: https://issues.apache.org/jira/browse/LUCENE-7799
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


LUCENE-7533 disabled the option combination 
{{splitOnWhitespace=false}}/{{autoGeneratePhraseQueries=true}} because of how 
{{autoGeneratePhraseQueries=true}} is handled: a query chunk is treated as if 
it were literally quoted.  When {{splitOnWhitespace=false}}, a query chunk can 
be multiple whitespace-separated words, and auto-quoting multiple terms will 
produce inappropriate phrase queries.

I have an idea about how to fix this: {{autoGeneratePhraseQueries=true}} is 
supposed to cause phrase queries to be constructed when multiple analyzed terms 
result from a single query word, e.g. when WordDelimiter(Graph)Filter splits 
words up.  Maybe this could be re-implemented in terms of offsets, since all 
terms from the same original term share the same offsets.
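The offset-based grouping idea above could be sketched roughly like this. This is a standalone illustration with a made-up Token model; real query-parser code would read Lucene's OffsetAttribute from a TokenStream, and the example input (a word-delimiter-style split of "wi-fi" where both subwords keep the original word's offsets) is an assumption for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetPhraseGrouping {
    // Minimal token model (hypothetical; real code would consume Lucene's
    // TokenStream and its OffsetAttribute).
    static final class Token {
        final String term;
        final int startOffset;
        final int endOffset;
        Token(String term, int startOffset, int endOffset) {
            this.term = term;
            this.startOffset = startOffset;
            this.endOffset = endOffset;
        }
    }

    // Group consecutive tokens sharing the same start/end offsets: each group
    // derives from one original query word, so a multi-term group would become
    // a phrase query while a single-term group stays an ordinary term query.
    static List<List<String>> groupByOffsets(List<Token> tokens) {
        List<List<String>> groups = new ArrayList<>();
        int prevStart = -1, prevEnd = -1;
        for (Token t : tokens) {
            if (!groups.isEmpty() && t.startOffset == prevStart && t.endOffset == prevEnd) {
                // same offsets as the previous token: same original word
                groups.get(groups.size() - 1).add(t.term);
            } else {
                List<String> group = new ArrayList<>();
                group.add(t.term);
                groups.add(group);
                prevStart = t.startOffset;
                prevEnd = t.endOffset;
            }
        }
        return groups;
    }

    public static void main(String[] args) {
        // "wi-fi now": a word-delimiter-style filter splits "wi-fi" into "wi"
        // and "fi", both keeping the original word's offsets [0,5)
        List<Token> tokens = List.of(
            new Token("wi", 0, 5), new Token("fi", 0, 5), new Token("now", 6, 9));
        System.out.println(groupByOffsets(tokens)); // [[wi, fi], [now]]
    }
}
```

Under this sketch, autoGeneratePhraseQueries=true would quote only the multi-term groups rather than the whole whitespace-containing chunk, which is what makes it compatible with splitOnWhitespace=false.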






[jira] [Commented] (SOLR-10534) Core state doesn't get cleaned up if core initialization fails

2017-04-21 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979546#comment-15979546
 ] 

Varun Thacker commented on SOLR-10534:
--

This reproduces on Solr 6.5 and Solr 6.4.2

> Core state doesn't get cleaned up if core initialization fails
> --
>
> Key: SOLR-10534
> URL: https://issues.apache.org/jira/browse/SOLR-10534
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> Steps to reproduce:
> Add this to the "add-unknown-fields-to-the-schema" in the data_driven config 
> set
> {code}
> 
> _version_
> 
> {code}
> ./bin/solr start
> ./bin/solr create -c test
> This doesn't work currently because of SOLR-10533 so it throws an error when 
> creating a core.
> At this point I have no local core created for 'test', but the UI displays 
> 'SolrCore initialization failures'.
> Now let's fix the schema and remove the update processor
> Running the create command again throws this error, even though the core 
> doesn't exist:
> {code}
> ~/solr-6.5.0$ ./bin/solr create -c test
> ERROR: 
> Core 'test' already exists!
> Checked core existence using Core API command:
> http://localhost:8983/solr/admin/cores?action=STATUS&core=test
> {code}
> When I restart and then try again it works correctly.
> The same happens in SolrCloud as well, but the experience is even worse: 
> since no collection was ever created but a local core exists, the create 
> command blocks and then returns an error after a while. 






[JENKINS] Lucene-Solr-master-Solaris () - Build # 1263 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1263/
Java: 

No tests ran.

Build Log:
[...truncated 15 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Cannot load file null from the 
master. Loading of files from the master is prohibited globally.
at 
org.jenkinsci.plugins.envinject.util.EnvInjectExceptionFormatter.forProhibitedLoadFromMaster(EnvInjectExceptionFormatter.java:19)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.getEnvVarsPropertiesJobProperty(EnvInjectEnvVars.java:64)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:193)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-Solaris #1263
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-Solaris #1263
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-Solaris #1263
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-6.x-MacOSX () - Build # 829 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/829/
Java: 

No tests ran.

Build Log:
[...truncated 13 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Cannot load file null from the 
master. Loading of files from the master is prohibited globally.
at 
org.jenkinsci.plugins.envinject.util.EnvInjectExceptionFormatter.forProhibitedLoadFromMaster(EnvInjectExceptionFormatter.java:19)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.getEnvVarsPropertiesJobProperty(EnvInjectEnvVars.java:64)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:193)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-6.x-MacOSX #829
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-6.x-MacOSX #829
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-6.x-MacOSX #829
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-6.x-Windows () - Build # 852 - Still Failing!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/852/
Java: 

No tests ran.

Build Log:
[...truncated 9 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Cannot load file null from the 
master. Loading of files from the master is prohibited globally.
at 
org.jenkinsci.plugins.envinject.util.EnvInjectExceptionFormatter.forProhibitedLoadFromMaster(EnvInjectExceptionFormatter.java:19)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.getEnvVarsPropertiesJobProperty(EnvInjectEnvVars.java:64)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:193)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-6.x-Windows #852
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-6.x-Windows #852
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-6.x-Windows #852
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-master-Linux () - Build # 19451 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/19451/
Java: 

No tests ran.

Build Log:
[...truncated 7 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Cannot load file null from the 
master. Loading of files from the master is prohibited globally.
at 
org.jenkinsci.plugins.envinject.util.EnvInjectExceptionFormatter.forProhibitedLoadFromMaster(EnvInjectExceptionFormatter.java:19)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.getEnvVarsPropertiesJobProperty(EnvInjectEnvVars.java:64)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:193)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-Linux #19451
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-Linux #19451
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-Linux #19451
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-6.x-Linux () - Build # 3336 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/3336/
Java: 

No tests ran.

Build Log:
[...truncated 9 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Cannot load file null from the 
master. Loading of files from the master is prohibited globally.
at 
org.jenkinsci.plugins.envinject.util.EnvInjectExceptionFormatter.forProhibitedLoadFromMaster(EnvInjectExceptionFormatter.java:19)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.getEnvVarsPropertiesJobProperty(EnvInjectEnvVars.java:64)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:193)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-6.x-Linux #3336
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-6.x-Linux #3336
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-6.x-Linux #3336
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-master-MacOSX () - Build # 3966 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3966/
Java: 

No tests ran.

Build Log:
[...truncated 11 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Cannot load file null from the 
master. Loading of files from the master is prohibited globally.
at 
org.jenkinsci.plugins.envinject.util.EnvInjectExceptionFormatter.forProhibitedLoadFromMaster(EnvInjectExceptionFormatter.java:19)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.getEnvVarsPropertiesJobProperty(EnvInjectEnvVars.java:64)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:193)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3966
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3966
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3966
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-master-Windows () - Build # 6523 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6523/
Java: 

No tests ran.

Build Log:
[...truncated 7 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Cannot load file null from the 
master. Loading of files from the master is prohibited globally.
at 
org.jenkinsci.plugins.envinject.util.EnvInjectExceptionFormatter.forProhibitedLoadFromMaster(EnvInjectExceptionFormatter.java:19)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.getEnvVarsPropertiesJobProperty(EnvInjectEnvVars.java:64)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:193)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-Windows #6523
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-Windows #6523
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-Windows #6523
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-6.x-Solaris () - Build # 793 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/793/
Java: 

No tests ran.

Build Log:
[...truncated 16 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Failed to evaluate the script
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:232)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:191)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: 
org.jenkinsci.plugins.scriptsecurity.scripts.UnapprovedUsageException: script 
not yet approved for use
at 
org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval.using(ScriptApproval.java:459)
at 
org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:170)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:230)
... 8 more
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-6.x-Solaris #793
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-6.x-Solaris #793
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-6.x-Solaris #793
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-master-Solaris () - Build # 1262 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1262/
Java: 

No tests ran.

Build Log:
[...truncated 14 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Failed to evaluate the script
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:232)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:191)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: 
org.jenkinsci.plugins.scriptsecurity.scripts.UnapprovedUsageException: script 
not yet approved for use
at 
org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval.using(ScriptApproval.java:459)
at 
org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:170)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:230)
... 8 more
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-Solaris #1262
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-Solaris #1262
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-Solaris #1262
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Updated] (SOLR-10521) sort by string field of the nested child when searching with {!parent}

2017-04-21 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10521:

Attachment: SOLR-10521.patch

[^SOLR-10521.patch] here we go!
The value source syntax is {{sort=childfield(field,$bjq) asc}} or 
{{sort=childfield(field) asc}}, assuming {{$q}}.
The sad thing is that I can't improve the QueryComponent change (it's brilliant 
already).
So far it can only be used for sorting, but it can be extended to regular value 
source functionality in the future. This might even work with numerics, but I 
haven't checked yet.
Reviews and suggestions are urgently required!

> sort by string field of the nested child when searching with {!parent}
> --
>
> Key: SOLR-10521
> URL: https://issues.apache.org/jira/browse/SOLR-10521
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-10521.patch, SOLR-10521.patch, SOLR-10521-raw.patch
>
>
> The idea is to integrate Lucene's {{ToParentBlockJoinSortField}} 
> The approach to hooking it up is a little bit tricky:
> {{sort=\{!childfield bjq=$q field=COLOR_s}desc}}
> the question no.1 wdyt about the syntax? 
> internally it creates a function query with valueSource which produces 
> {{ToParentBlockJoinSortField}} 
> The main challenge is picking the Solr field type from 
> {{ToParentBlockJoinSortField}}, because as-is it's broken on {{mergeIds}} - 
> BytesRefs need to be properly marshalled and unmarshalled by a field type 
> from the child scope. 






[JENKINS] Lucene-Solr-master-MacOSX () - Build # 3965 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3965/
Java: 

No tests ran.

Build Log:
[...truncated 10 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Failed to evaluate the script
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:232)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:191)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: 
org.jenkinsci.plugins.scriptsecurity.scripts.UnapprovedUsageException: script 
not yet approved for use
at 
org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval.using(ScriptApproval.java:459)
at 
org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:170)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:230)
... 8 more
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3965
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3965
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-MacOSX #3965
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-6.x-Windows () - Build # 851 - Still Failing!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/851/
Java: 

No tests ran.

Build Log:
[...truncated 8 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Failed to evaluate the script
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:232)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:191)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: 
org.jenkinsci.plugins.scriptsecurity.scripts.UnapprovedUsageException: script 
not yet approved for use
at 
org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval.using(ScriptApproval.java:459)
at 
org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:170)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:230)
... 8 more
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-6.x-Windows #851
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-6.x-Windows #851
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-6.x-Windows #851
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-6.x-MacOSX () - Build # 828 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/828/
Java: 

No tests ran.

Build Log:
[...truncated 12 lines...]
ERROR: SEVERE ERROR occurs
org.jenkinsci.lib.envinject.EnvInjectException: Failed to evaluate the script
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:232)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironmentJobPropertyObject(EnvInjectListener.java:191)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:47)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
Caused by: 
org.jenkinsci.plugins.scriptsecurity.scripts.UnapprovedUsageException: script 
not yet approved for use
at 
org.jenkinsci.plugins.scriptsecurity.scripts.ScriptApproval.using(ScriptApproval.java:459)
at 
org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript.evaluate(SecureGroovyScript.java:170)
at 
org.jenkinsci.plugins.envinject.service.EnvInjectEnvVars.executeGroovyScript(EnvInjectEnvVars.java:230)
... 8 more
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-6.x-MacOSX #828
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-6.x-MacOSX #828
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-6.x-MacOSX #828
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[JENKINS] Lucene-Solr-master-Windows () - Build # 6522 - Failure!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/6522/
Java: 

No tests ran.

Build Log:
[...truncated 2 lines...]
FATAL: java.io.IOException: Unexpected termination of the channel
java.io.EOFException
at 
java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2624)
at 
java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3099)
at 
java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:853)
at java.io.ObjectInputStream.<init>(ObjectInputStream.java:349)
at 
hudson.remoting.ObjectInputStreamEx.<init>(ObjectInputStreamEx.java:48)
at 
hudson.remoting.AbstractSynchronousByteArrayCommandTransport.read(AbstractSynchronousByteArrayCommandTransport.java:34)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:59)
Caused: java.io.IOException: Unexpected termination of the channel
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:73)
Caused: hudson.remoting.RequestAbortedException
at hudson.remoting.Request.abort(Request.java:307)
at hudson.remoting.Channel.terminate(Channel.java:896)
at 
hudson.remoting.SynchronousCommandTransport$ReaderThread.run(SynchronousCommandTransport.java:92)
at ..remote call to Windows VBOX(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1545)
at hudson.remoting.Request.call(Request.java:172)
at hudson.remoting.Channel.call(Channel.java:829)
at hudson.FilePath.act(FilePath.java:1080)
at 
org.jenkinsci.plugins.envinject.service.EnvironmentVariablesNodeLoader.gatherEnvironmentVariablesNode(EnvironmentVariablesNodeLoader.java:48)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.loadEnvironmentVariablesNode(EnvInjectListener.java:80)
at 
org.jenkinsci.plugins.envinject.EnvInjectListener.setUpEnvironment(EnvInjectListener.java:42)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.createLauncher(AbstractBuild.java:572)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:492)
at hudson.model.Run.execute(Run.java:1730)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:405)
ERROR: Step ‘Archive the artifacts’ failed: no workspace for 
Lucene-Solr-master-Windows #6522
ERROR: Step ‘Scan for compiler warnings’ failed: no workspace for 
Lucene-Solr-master-Windows #6522
ERROR: Step ‘Publish JUnit test result report’ failed: no workspace for 
Lucene-Solr-master-Windows #6522
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-04-21 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979461#comment-15979461
 ] 

Cassandra Targett commented on SOLR-10295:
--

So, from these links and other browsing/searching around, I've learned a few 
things:

* There are 2 dedicated VMs for building docs, and it seems all that's required 
is adding the right tag to the job ({{git-websites}})
* gitpubsub is described here: https://www.apache.org/dev/project-site.html and 
here: https://www.apache.org/dev/gitpubsub.html
* There is some kind of Jekyll support already integrated into either these VMs 
or the gitpubsub process
* I have no good citation for this, but several issues I've looked at seem to 
imply that you use EITHER the CMS OR gitpubsub, not a combo of both
* As stated in the Jenkins node labels page Steve provided, if you use 
gitpubsub you also have to have all your generated docs in a branch "asf-site", 
which I think is because the generated files are committed back into the 
project and then pushed via gitpubsub to webservers.


> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.
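
For illustration only: the {{extpaths.txt}} mechanism described above amounts to a plain list of production-only paths, one per line, relative to the site root. The entries below are hypothetical, patterned on the ref-guide URL scheme proposed in the issue:

{code}
solr/ref-guide/6_1
solr/ref-guide/6_2
{code}

Paths listed there are preserved when the CMS syncs the production tree with staging, instead of being nuked.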



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException

2017-04-21 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-9120:
---
Attachment: SOLR-9120.patch

Attaching a new patch based on the patch suggested by [~hgadre].

Instead of just changing the LukeRequestHandler#getIndexInfo method signature, 
I have deprecated the original signature.  If this is committed, the deprecated 
method should be removed in the commit to master.  I have no idea whether user 
plugin code is using this method, but since it is public, I think we have to 
assume that somebody depends on it.
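
The deprecate-and-delegate pattern described above can be sketched as follows. The class and method names here are illustrative only, not the real LukeRequestHandler#getIndexInfo signatures:

```java
public class IndexInfoCompat {
    /** New signature (the extra parameter is hypothetical). */
    public static String getIndexInfo(String coreName, boolean withFileSizes) {
        return coreName + (withFileSizes ? " [sizes]" : "");
    }

    /**
     * Old public signature, kept so third-party plugin code still compiles;
     * slated for removal on master once callers migrate.
     */
    @Deprecated
    public static String getIndexInfo(String coreName) {
        // Delegate to the new signature, preserving the old behavior.
        return getIndexInfo(coreName, true);
    }

    public static void main(String[] args) {
        System.out.println(getIndexInfo("collection1"));
    }
}
```

Existing callers keep compiling (with a deprecation warning), while new code opts into the richer signature.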

If one of you who has had a failure with the backup feature can invent a test 
for the backup feature that fails without the patch and passes with it, that 
would be the best way to ensure that the fix is good.

The patch passes precommit.  Still working on the default Solr test suite.


> o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- 
> NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
> Attachments: SOLR-9120.patch
>
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> 

[jira] [Updated] (SOLR-10521) sort by string field of the nested child when searching with {!parent}

2017-04-21 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10521:

Attachment: SOLR-10521.patch

[^SOLR-10521.patch] is done with equality testing; it has a cool cloud test.

I want to experiment with value source syntax, WDYT about 
{{sort=childfield(name,$q) asc}}?

> sort by string field of the nested child when searching with {!parent}
> --
>
> Key: SOLR-10521
> URL: https://issues.apache.org/jira/browse/SOLR-10521
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-10521.patch, SOLR-10521-raw.patch
>
>
> The idea is to integrate Lucene's {{ToParentBlockJoinSortField}} 
> The approach to hook it up is a little bit tricky:
> {{sort=\{!childfield bjq=$q field=COLOR_s}desc}}
> the question no.1 wdyt about the syntax? 
> internally it creates a function query with valueSource which produces 
> {{ToParentBlockJoinSortField}} 
> The main challenge is picking Solr field type from  
> {{ToParentBlockJoinSortField}}, because as-is it's broken on {{mergeIds}} - 
> BytesRefs need to be properly marshalled and unmarshalled by a field type from 
> child scope. 
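
A full request using the proposed syntax might look like the following sketch (the collection's parent filter and field names are hypothetical; the parser name is the one proposed above):

{code}
q={!parent which="type_s:parent"}+COLOR_s:red
sort={!childfield bjq=$q field=COLOR_s}desc
{code}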






[jira] [Updated] (SOLR-10521) sort by string field of the nested child when searching with {!parent}

2017-04-21 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10521?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-10521:

Attachment: (was: SOLR-10500.patch)

> sort by string field of the nested child when searching with {!parent}
> --
>
> Key: SOLR-10521
> URL: https://issues.apache.org/jira/browse/SOLR-10521
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mikhail Khludnev
> Attachments: SOLR-10521.patch, SOLR-10521-raw.patch
>
>
> The idea is to integrate Lucene's {{ToParentBlockJoinSortField}} 
> The approach to hook it up is a little bit tricky:
> {{sort=\{!childfield bjq=$q field=COLOR_s}desc}}
> the question no.1 wdyt about the syntax? 
> internally it creates a function query with valueSource which produces 
> {{ToParentBlockJoinSortField}} 
> The main challenge is picking Solr field type from  
> {{ToParentBlockJoinSortField}}, because as-is it's broken on {{mergeIds}} - 
> BytesRefs need to be properly marshalled and unmarshalled by a field type from 
> child scope. 






[jira] [Updated] (SOLR-8762) DIH entity child=true should respond nested documents on debug

2017-04-21 Thread gopikannan venugopalsamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gopikannan venugopalsamy updated SOLR-8762:
---
Attachment: (was: SOLR-8762.patch)

> DIH entity child=true should respond nested documents on debug
> --
>
> Key: SOLR-8762
> URL: https://issues.apache.org/jira/browse/SOLR-8762
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: newbie, newdev
> Attachments: SOLR-8762.patch
>
>
> Problem is described in 
> [comment|https://issues.apache.org/jira/browse/SOLR-5147?focusedCommentId=14744852=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14744852]
>  of SOLR-5147 






[jira] [Updated] (SOLR-8762) DIH entity child=true should respond nested documents on debug

2017-04-21 Thread gopikannan venugopalsamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

gopikannan venugopalsamy updated SOLR-8762:
---
Attachment: SOLR-8762.patch

[~mkhludnev] last patch was incomplete, Please check this.

> DIH entity child=true should respond nested documents on debug
> --
>
> Key: SOLR-8762
> URL: https://issues.apache.org/jira/browse/SOLR-8762
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - DataImportHandler
>Reporter: Mikhail Khludnev
>Priority: Minor
>  Labels: newbie, newdev
> Attachments: SOLR-8762.patch, SOLR-8762.patch
>
>
> Problem is described in 
> [comment|https://issues.apache.org/jira/browse/SOLR-5147?focusedCommentId=14744852=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14744852]
>  of SOLR-5147 






[jira] [Commented] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException

2017-04-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979316#comment-15979316
 ] 

Shawn Heisey commented on SOLR-9120:


If you can build a test that reproduces the problem, which then passes with 
your patch, that would be VERY helpful.

> o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- 
> NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}




[jira] [Commented] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException

2017-04-21 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979311#comment-15979311
 ] 

Shawn Heisey commented on SOLR-9120:


I do not know the code well enough to say whether your fix is 100% correct.

Is there any way to get that github commit as a diff that I can apply?

> o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- 
> NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}





[jira] [Assigned] (LUCENE-7798) add equals/hashCode to ToParentBlockJoinSortField

2017-04-21 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned LUCENE-7798:


Assignee: Mikhail Khludnev

> add equals/hashCode to ToParentBlockJoinSortField
> -
>
> Key: LUCENE-7798
> URL: https://issues.apache.org/jira/browse/LUCENE-7798
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/join
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>
> Since SOLR-10521, {{ToParentBlockJoinSortField}} is going to be used as a 
> query result key, so it's worth implementing proper equality methods.






[jira] [Created] (LUCENE-7798) add equals/hashCode to ToParentBlockJoinSortField

2017-04-21 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created LUCENE-7798:


 Summary: add equals/hashCode to ToParentBlockJoinSortField
 Key: LUCENE-7798
 URL: https://issues.apache.org/jira/browse/LUCENE-7798
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/join
Reporter: Mikhail Khludnev


Since SOLR-10521, {{ToParentBlockJoinSortField}} is going to be used as a query 
result key, so it's worth implementing proper equality methods.






[jira] [Commented] (LUCENE-7796) Make reThrow idiom declare RuntimeException return type so callers may use it in a way that compiler knows subsequent code is unreachable

2017-04-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979298#comment-15979298
 ] 

Hoss Man commented on LUCENE-7796:
--

Strawman:

* add {{Error rethrowAsUnchecked(...)}} to work the way this issue suggests it 
should
** or {{RuntimeException}} if you prefer -- no opinion, just going based on the 
cast currently done in {{Rethrow}}
** personally i think the "Sneaky" code looks sketchy as hell and would feel a 
lot more comfortable using the more obvious and easy to understand {{throw new 
RuntimeException(t)}}
* rename existing {{void rethrow(...)}} methods to {{void 
rethrowIfNonNull(...)}} (they can delegate to {{rethrowAsUnchecked}})
* some existing type-specific impls (like the {{IOException}} version in 
{{IOUtils}}) can/should probably be renamed to be clear what they're for and 
have javadoc (links back to the "main" versions) justifying their differences

bq. A cleanup of other "rethrow-hack" places would require moving Rethrow from 
test utils back to the core and I think (but didn't look further) we actually 
moved it the other way around so as to prevent people from using it... So I'm a 
bit lost as to which direction I should go with this patch

If the idiom/code is in use (outside of tests), then the code is in use -- we 
could consider those usages "bugs" and "fix" the code to not need a 
{{reThrow(...)}} type function, but as long as that code is going to exist, 
having one copy of it (in core) seems better than having 2,3,5,100 copies of it.

bq. // TODO: remove the impl in test-framework, this one is more elegant :-)

wait ... am i missing something? aren't those 2 impls identical?

> Make reThrow idiom declare RuntimeException return type so callers may use it 
> in a way that compiler knows subsequent code is unreachable
> -
>
> Key: LUCENE-7796
> URL: https://issues.apache.org/jira/browse/LUCENE-7796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 6.x, master (7.0)
>
>
> A spinoff from LUCENE-7792: reThrow can be declared to return an unchecked 
> exception so that callers can choose to use {{throw reThrow(...)}} as an 
> idiom to let the compiler know any subsequent code will be unreachable.






[jira] [Commented] (LUCENE-7796) Make reThrow idiom declare RuntimeException return type so callers may use it in a way that compiler knows subsequent code is unreachable

2017-04-21 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979293#comment-15979293
 ] 

Dawid Weiss commented on LUCENE-7796:
-

This is what I think IOUtils.reThrow* methods should look like:
{code}
  /**
   * A utility method that takes a previously caught
   * {@code Throwable} and rethrows it as an {@code IOException}
   * if the argument was an {@code IOException}, or otherwise
   * wraps it in a {@code RuntimeException}.
   * 
   * @param th The throwable to rethrow, must not be null.
   * @return This method never returns normally, it always throws an exception.
   *   The return value of this method can be used for an idiom that informs
   *   the compiler that the code below the invocation of this method is
   *   unreachable: {@code throw reThrow(argument)}.  
   */
  public static RuntimeException reThrow(Throwable th) throws IOException {
    if (th == null) {
      throw new AssertionError("reThrow doesn't accept null arguments.");
    }
    if (th instanceof IOException) {
      throw (IOException) th;
    }
    throw reThrowUnchecked(th);
  }

  /**
   * A utility method that takes a previously caught
   * {@code Throwable} and rethrows it if it's an
   * unchecked exception, or wraps it in a {@code RuntimeException} otherwise.
   * 
   * @param th The throwable to rethrow, must not be null.
   * @return This method never returns normally, it always throws an exception.
   *   The return value of this method can be used for an idiom that informs
   *   the compiler that the code below the invocation of this method is
   *   unreachable: {@code throw reThrowUnchecked(argument)}.  
   */
  public static RuntimeException reThrowUnchecked(Throwable th) {
    if (th == null) {
      throw new AssertionError("reThrowUnchecked doesn't accept null arguments.");
    }
    if (th instanceof RuntimeException) {
      throw (RuntimeException) th;
    }
    if (th instanceof Error) {
      throw (Error) th;
    }
    throw new RuntimeException(th);
  }
{code}
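
The caller-side payoff can be sketched as below. This is a self-contained toy (the class name is illustrative, not Lucene's actual {{IOUtils}}): because the method declares a {{RuntimeException}} return type, a caller can write {{throw reThrowUnchecked(t)}} and the compiler treats the statement as ending the method.

```java
public class RethrowIdiom {
    // Mirrors the reThrowUnchecked logic proposed above.
    public static RuntimeException reThrowUnchecked(Throwable th) {
        if (th instanceof RuntimeException) {
            throw (RuntimeException) th;
        }
        if (th instanceof Error) {
            throw (Error) th;
        }
        throw new RuntimeException(th);
    }

    public static int parseOrFail(String s) {
        try {
            return Integer.parseInt(s.trim());
        } catch (Throwable t) {
            // Thanks to the declared return type, this statement ends the
            // method for the compiler: no dummy "return 0;" is needed below.
            throw reThrowUnchecked(t);
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrFail(" 42 "));
    }
}
```

Note that unchecked exceptions (here {{NumberFormatException}}) still propagate unwrapped, exactly as before; only checked throwables get wrapped.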

> Make reThrow idiom declare RuntimeException return type so callers may use it 
> in a way that compiler knows subsequent code is unreachable
> -
>
> Key: LUCENE-7796
> URL: https://issues.apache.org/jira/browse/LUCENE-7796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 6.x, master (7.0)
>
>
> A spinoff from LUCENE-7792: reThrow can be declared to return an unchecked 
> exception so that callers can choose to use {{throw reThrow(...)}} as an 
> idiom to let the compiler know any subsequent code will be unreachable.






[jira] [Commented] (SOLR-9674) Facet Values Sort Order: Index should ignore casing

2017-04-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979282#comment-15979282
 ] 

Yonik Seeley commented on SOLR-9674:


One idea for implementing this would be a facet function that lowercases 
values, which the facet could then sort by.

{code}
{
  type: terms,
  field: myfield,
  sort: "lowerval",
  facet: {
    lowerval : "lowercase(myfield)",
    x : "otherstats(otherfield)"
  }
}
{code}


> Facet Values Sort Order: Index should ignore casing
> ---
>
> Key: SOLR-9674
> URL: https://issues.apache.org/jira/browse/SOLR-9674
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Luke P Warwick
>  Labels: facet, features
>
> Index facet value sorting puts all values that start with a capital letter 
> before all values that start with a lower case letter.  
> Since users often don't know this pattern, nor that the value they are 
> looking for might start with a lower case letter, it leads to confusion and 
> bad experiences, in particular in E-commerce with the 'Brand' Facet.
> Desired behavior is that the order ignore case. If two values are otherwise 
> equal (Patagonia,patagonia), put the capital first.
> Example:
> Current Order: Anon,Marmot,Patagonia,Smith,Zoot,maloja,prAna
> Desired Order: Anon,Marmot,maloja,Patagonia,prAna,Smith,Zoot
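The requested ordering (case-insensitive, with the capitalized variant first when two values are otherwise equal) corresponds to a two-level comparison. A sketch in plain Java with its own sample values, purely to illustrate the comparison rule, not how the facet module would implement it:

```java
import java.util.Arrays;
import java.util.Comparator;

public class BrandSort {
  // Primary key: case-insensitive order. Tie-breaker: natural (code point)
  // order, which places 'P' (0x50) before 'p' (0x70), i.e. capital first.
  public static String[] sortBrands(String[] brands) {
    Arrays.sort(brands,
        String.CASE_INSENSITIVE_ORDER.thenComparing(Comparator.<String>naturalOrder()));
    return brands;
  }

  public static void main(String[] args) {
    String[] brands = {"patagonia", "Zoot", "Patagonia", "anon"};
    System.out.println(Arrays.toString(sortBrands(brands)));
    // prints [anon, Patagonia, patagonia, Zoot]
  }
}
```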






[jira] [Commented] (LUCENE-7796) Make reThrow idiom declare RuntimeException return type so callers may use it in a way that compiler knows subsequent code is unreachable

2017-04-21 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979280#comment-15979280
 ] 

Dawid Weiss commented on LUCENE-7796:
-

I looked at the code now. The test-framework's {{Rethrow}} is what I originally 
had in mind, but I was mistaken about Mike's code -- this uses IOUtils.reThrow 
and there is a problem with it -- "If the argument is null then this method 
does nothing."... so in fact a call to IOUtils.reThrow *can* be a no-op, as is 
evident when one looks at how it's used:
{code}
  // NOTE: does nothing if firstThrowable is null
  IOUtils.reThrowUnchecked(firstThrowable);
{code}
or
{code}
if (success) {
  // Does nothing if firstExc is null:
  IOUtils.reThrow(firstExc);
}
{code}

We can make it return IOException (or something unchecked), but if the argument 
can be null it absolutely cannot be used with throw... Will this make code 
clearer or more confusing? For me, it was surprising the argument can be null 
(and ignored), so based on least surprise principle I'd make 
IOUtils.reThrow(exception) require a non-null argument and always throw 
internally. Then the return type can be used without side effects (either with 
throw or as a simple statement).

A cleanup of other "rethrow-hack" places would require moving Rethrow from test 
utils back to the core and I think (but didn't look further) we actually moved 
it the other way around so as to prevent people from using it... Although, on 
the other hand -- {{AttributeFactory}}:
{code}
  // Hack to rethrow unknown Exceptions from {@link MethodHandle#invoke}:
  // TODO: remove the impl in test-framework, this one is more elegant :-)
  static void rethrow(Throwable t) {
...
{code}

So I'm a bit lost as to which direction I should go with this patch. ;)


> Make reThrow idiom declare RuntimeException return type so callers may use it 
> in a way that compiler knows subsequent code is unreachable
> -
>
> Key: LUCENE-7796
> URL: https://issues.apache.org/jira/browse/LUCENE-7796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 6.x, master (7.0)
>
>
> A spinoff from LUCENE-7792: reThrow can be declared to return an unchecked 
> exception so that callers can choose to use {{throw reThrow(...)}} as an 
> idiom to let the compiler know any subsequent code will be unreachable.






[jira] [Commented] (SOLR-9527) Solr RESTORE api doesn't distribute the replicas uniformly

2017-04-21 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979277#comment-15979277
 ] 

Varun Thacker commented on SOLR-9527:
-

Thanks David for the review. I'll go through it and commit in on Monday

> Solr RESTORE api doesn't distribute the replicas uniformly 
> ---
>
> Key: SOLR-9527
> URL: https://issues.apache.org/jira/browse/SOLR-9527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
>Reporter: Hrishikesh Gadre
>Assignee: Varun Thacker
> Attachments: SOLR-9527.patch, SOLR-9527.patch, SOLR-9527.patch, 
> SOLR-9527.patch, Solr 9527.pdf
>
>
> Please refer to this email thread for details,
> http://lucene.markmail.org/message/ycun4x5nx7lwj5sk?q=solr+list:org%2Eapache%2Elucene%2Esolr-user+order:date-backward=1






[jira] [Commented] (SOLR-9527) Solr RESTORE api doesn't distribute the replicas uniformly

2017-04-21 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979276#comment-15979276
 ] 

David Smiley commented on SOLR-9527:


+1 to the patch

> Solr RESTORE api doesn't distribute the replicas uniformly 
> ---
>
> Key: SOLR-9527
> URL: https://issues.apache.org/jira/browse/SOLR-9527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
>Reporter: Hrishikesh Gadre
>Assignee: Varun Thacker
> Attachments: SOLR-9527.patch, SOLR-9527.patch, SOLR-9527.patch, 
> SOLR-9527.patch, Solr 9527.pdf
>
>
> Please refer to this email thread for details,
> http://lucene.markmail.org/message/ycun4x5nx7lwj5sk?q=solr+list:org%2Eapache%2Elucene%2Esolr-user+order:date-backward=1









[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-04-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979267#comment-15979267
 ] 

Steve Rowe commented on SOLR-10295:
---

bq. I guess one of us should talk to INFRA about what our options might be here.

I see on this INFRA wiki doc page that there are currently two Jenkins nodes 
(H19 and H20) that are intended to be used for website publishing: 
https://cwiki.apache.org/confluence/display/INFRA/Jenkins+node+labels

bq. Nodes that are reserved for ANY project that wants to build their website 
docs and publish directly live (requires asf-site and gitpubsub. See Docs here)

The Docs link is missing though :(.

I found this INFRA JIRA issue that appears to have some helpful info: 
https://issues.apache.org/jira/browse/INFRA-10722




> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.






[jira] [Commented] (LUCENE-7796) Make reThrow idiom declare return type of Throwable so callers may use it in a way that compiler knows subsequent code is unreachable

2017-04-21 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979258#comment-15979258
 ] 

Dawid Weiss commented on LUCENE-7796:
-

That has to be an unchecked exception, Hoss, not Throwable (or the type of 
argument). The whole point of the rethrow hack is to rethrow a checked 
exception from a method that doesn't declare checked exceptions. 

I scanned the code quickly and the rethrow idiom is actually in a few places 
(AttributeFactory, Rethrow, SnowballProgram, IOUtils -- although slightly 
different here). It'd be a good chance to clean it up. IOUtils has a slightly 
different flavor, but all the others use the same idiom.

> Make reThrow idiom declare return type of Throwable so callers may use it in 
> a way that compiler knows subsequent code is unreachable
> -
>
> Key: LUCENE-7796
> URL: https://issues.apache.org/jira/browse/LUCENE-7796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 6.x, master (7.0)
>
>
> A spinoff from LUCENE-7792: reThrow can be declared to return an unchecked 
> exception so that callers can choose to use {{throw reThrow(...)}} as an 
> idiom to let the compiler know any subsequent code will be unreachable.






[jira] [Updated] (LUCENE-7796) Make reThrow idiom declare RuntimeException return type so callers may use it in a way that compiler knows subsequent code is unreachable

2017-04-21 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-7796:

Summary: Make reThrow idiom declare RuntimeException return type so callers 
may use it in a way that compiler knows subsequent code is unreachable  (was: 
Make reThrow idiom declare return type of Throwable so callers may use it in a 
way that compiler knows subsequent code is unreachable)

> Make reThrow idiom declare RuntimeException return type so callers may use it 
> in a way that compiler knows subsequent code is unreachable
> -
>
> Key: LUCENE-7796
> URL: https://issues.apache.org/jira/browse/LUCENE-7796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 6.x, master (7.0)
>
>
> A spinoff from LUCENE-7792: reThrow can be declared to return an unchecked 
> exception so that callers can choose to use {{throw reThrow(...)}} as an 
> idiom to let the compiler know any subsequent code will be unreachable.






[jira] [Updated] (SOLR-10551) Add cat and list Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10551:
--
Attachment: SOLR-10551.patch

> Add cat and list Streaming Expressions
> --
>
> Key: SOLR-10551
> URL: https://issues.apache.org/jira/browse/SOLR-10551
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
> Attachments: SOLR-10551.patch
>
>
> The *cat* expression concatenates streams. Sample syntax:
> {code}
> cat(expr,expr, ...)
> {code}
> This *list* expression gathers the tuples from a stream and flattens them 
> into a list in a single tuple. Sample syntax:
> {code}
> list(name, expr)
> {code}
> The *name* param is the key in the tuple pointing to the list.






[jira] [Resolved] (SOLR-9333) Add list and tup Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9333?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-9333.
--
Resolution: Duplicate

Based on the work on SOLR-10551, I think this ticket can be closed. We can 
consider the tup expression in another ticket.

> Add list and tup Streaming Expressions
> --
>
> Key: SOLR-9333
> URL: https://issues.apache.org/jira/browse/SOLR-9333
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
> Attachments: SOLR-9333.patch, SOLR-9333.patch, SOLR-9333.patch
>
>
> Currently in streaming expressions, if we want to provide a list of 
> named/unnamed parameters, we wrap the values inside a StreamExpressionValue 
> and manually separate values based on the ',' character. 
> In order to make streaming expressions more compact and extensible, I 
> propose a new kind of parameter for streaming expressions. 
> {code}
> Map Expression Parameter : { key1 = value1, key2 = value2,  }
> {code}
> So new tuple stream can be 
> {code}
> arrays({term=a, score=1.0}, {term=b, score=2.0})
> or
> search(collection1, q=*:*, fl="id,a_s,a_i,a_f", sort="a_f asc, a_i asc", 
> aliases={a_i=alias.a_i, a_s=name})
> {code}






[jira] [Created] (SOLR-10551) Add cat and list Streaming Expressions

2017-04-21 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-10551:
-

 Summary: Add cat and list Streaming Expressions
 Key: SOLR-10551
 URL: https://issues.apache.org/jira/browse/SOLR-10551
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The *cat* expression concatenates streams. Sample syntax:

{code}
cat(expr,expr, ...)
{code}

This *list* expression gathers the tuples from a stream and flattens them into 
a list in a single tuple. Sample syntax:

{code}
list(name, expr)
{code}

The *name* param is the key in the tuple pointing to the list.








[jira] [Commented] (SOLR-10295) Decide online location for Ref Guide HTML pages

2017-04-21 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979241#comment-15979241
 ] 

Cassandra Targett commented on SOLR-10295:
--

bq. Most commits would be master or branch_6x and trigger a refGuide build of 
unreleased next-ver docs, no problem with flux there. Any asciidoc commits on 
release-branches (e.g. branch_6_5) would be few and minor, so should not create 
much flux either.

Yeah, these are good points. I won't worry about flux then.

bq. If we want (near-)immediate builds triggered by commits, we'll have to use 
a different Jenkins machine than the lucene VM, because it has only one 
executor; if I understand Jenkins' model correctly, currently running jobs (and 
queued jobs too, I think) will take precedence over newly triggered jobs.

That's my understanding of the model also. Any job running tests shouldn't be 
stopped for docs anyway. But it seems to me that VM is always doing something. 
Looking at the load statistics for the {{lucene}} VM for the past couple of 
weeks, the queue length is always between 10-15 jobs...it could be a really 
long time before the docs are built.

I guess one of us should talk to INFRA about what our options might be here.

> Decide online location for Ref Guide HTML pages
> ---
>
> Key: SOLR-10295
> URL: https://issues.apache.org/jira/browse/SOLR-10295
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>
> One of the biggest decisions we need to make is where to put the new Solr Ref 
> Guide. Confluence at least had the whole web-hosting bits figured out; we 
> have to figure that out on our own.
> An obvious (maybe only to me) choice is to integrate the Ref Guide with the 
> Solr Website. However, due to the size of the Solr Ref Guide (nearly 200 
> pages), I believe trying to publish it solely with existing CMS tools will 
> create problems similar to those described in the Lucene ReleaseTodo when it 
> comes to publishing the Lucene/Solr javadocs (see 
> https://wiki.apache.org/lucene-java/ReleaseTodo#Website_.2B-.3D_javadocs).
> A solution exists already, and it's what is done for the javadocs. From the 
> above link:
> {quote}
> The solution: skip committing javadocs to the source tree, then staging, then 
> publishing, and instead commit javadocs directly to the production tree. 
> Ordinarily this would be problematic, because the CMS wants to keep the 
> production tree in sync with the staging tree, so anything it finds in the 
> production tree that's not in the staging tree gets nuked. However, the CMS 
> has a built-in mechanism to allow exceptions to the 
> keep-production-in-sync-with-staging rule: extpaths.txt.
> {quote}
> This solution (for those who don't know already) is to provide a static text 
> file (extpaths.txt) that includes the javadoc paths that should be presented 
> in production, but which won't exist in CMS staging environments. This way, 
> we can publish HTML files directly to production and they will be preserved 
> when the staging-production trees are synced.
> The rest of the process would be quite similar to what is documented in the 
> ReleaseTodo in sections following the link above - use SVN to update the CMS 
> production site and update extpaths.txt properly. We'd do this in the 
> {{solr}} section of the CMS obviously, and not the {{lucene}} section.
> A drawback to this approach is that we won't have a staging area to view the 
> Guide before publication. Files would be generated and go to production 
> directly. We may want to put a process in place to give some additional 
> confidence that things look right first (someone's people.apache.org 
> directory? a pre-pub validation script that tests...something...?), and agree 
> on what we'd be voting on when a vote to release comes up. However, the CMS 
> is pretty much the only option that I can think of...other ideas are welcome 
> if they might work.
> We also need to agree on URL paths that make sense, considering we'll have a 
> new "site" for each major release - something like 
> {{http://lucene.apache.org/solr/ref-guide/6_1}} might work? Other thoughts 
> are welcome on this point also.






[jira] [Updated] (SOLR-10550) Improve FileFloatSource eviction // reduce FileFloatSource memory footprint

2017-04-21 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Torsten Bøgh Köster updated SOLR-10550:
---
Attachment: solr_filefloatsource.patch

> Improve FileFloatSource eviction // reduce FileFloatSource memory footprint
> ---
>
> Key: SOLR-10550
> URL: https://issues.apache.org/jira/browse/SOLR-10550
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 6.5
>Reporter: Torsten Bøgh Köster
> Attachments: solr_filefloatsource.patch
>
>
> As a follow-up from {{SOLR-10506}}, we found another possible memory leak in 
> Solr. The values generated from an {{ExternalFileField}} are cached in a 
> static cache inside the {{FileFloatSource}}. That cache caches both an 
> {{IndexReader}} and the {{FileFloatSource}}s loaded using that {{IndexReader}}.
> Cache eviction is left to the internally used {{WeakHashMap}}, or a full 
> eviction can be triggered via URL. We are dealing with large synonym files 
> and word lists stored in managed resources. Those are tied to the 
> {{SolrCore}} as described in {{SOLR-10506}}. We're also using 
> {{ExternalFileField}}s whose {{FileFloatSource}}s are cached in said static 
> cache. The {{FileFloatSource}}s hold strong (transitive) references to the 
> {{SolrCore}} they have been created for. 
> After a couple of collection reloads, the cache eviction mechanism of the 
> {{WeakHashMap}} gets activated only when pretty close to heap exhaustion. The 
> attached patch adds a mechanism to evict cache entries created in the context 
> of a {{SolrCore}} upon its close, using a close hook in the 
> {{ExternalFileFieldReloader}}. It furthermore adds a static cache reset 
> method for all entries bound to a given {{IndexReader}}. I'm not sure if the 
> added cache resets are too aggressive or executed too often; I'd like to 
> leave that to the experts ;-)






[jira] [Created] (SOLR-10550) Improve FileFloatSource eviction // reduce FileFloatSource memory footprint

2017-04-21 Thread JIRA
Torsten Bøgh Köster created SOLR-10550:
--

 Summary: Improve FileFloatSource eviction // reduce 
FileFloatSource memory footprint
 Key: SOLR-10550
 URL: https://issues.apache.org/jira/browse/SOLR-10550
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Server
Affects Versions: 6.5
Reporter: Torsten Bøgh Köster
 Attachments: solr_filefloatsource.patch

As a follow-up from {{SOLR-10506}}, we found another possible memory leak in 
Solr. The values generated from an {{ExternalFileField}} are cached in a static 
cache inside the {{FileFloatSource}}. That cache caches both an {{IndexReader}} 
and the {{FileFloatSource}}s loaded using that {{IndexReader}}.

Cache eviction is left to the internally used {{WeakHashMap}}, or a full 
eviction can be triggered via URL. We are dealing with large synonym files and 
word lists stored in managed resources. Those are tied to the {{SolrCore}} as 
described in {{SOLR-10506}}. We're also using {{ExternalFileField}}s whose 
{{FileFloatSource}}s are cached in said static cache. The {{FileFloatSource}}s 
hold strong (transitive) references to the {{SolrCore}} they have been created 
for. 

After a couple of collection reloads, the cache eviction mechanism of the 
{{WeakHashMap}} gets activated only when pretty close to heap exhaustion. The 
attached patch adds a mechanism to evict cache entries created in the context 
of a {{SolrCore}} upon its close, using a close hook in the 
{{ExternalFileFieldReloader}}. It furthermore adds a static cache reset method 
for all entries bound to a given {{IndexReader}}. I'm not sure if the added 
cache resets are too aggressive or executed too often; I'd like to leave that 
to the experts ;-)
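The eviction idea described here (explicit removal when the owning core closes, rather than waiting for the {{WeakHashMap}}'s GC-driven cleanup near heap exhaustion) can be sketched generically. All names below ({{Core}}, {{loadExternalFloats}}, the cache shape) are illustrative stand-ins, not Solr's actual {{FileFloatSource}} or close-hook API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class CloseHookEviction {
  // Stand-in for the static cache: keyed by the owning core's name.
  public static final Map<String, float[]> CACHE = new HashMap<>();

  // Minimal stand-in for a core that runs registered hooks on close.
  public static class Core {
    final String name;
    final List<Runnable> closeHooks = new ArrayList<>();
    public Core(String name) { this.name = name; }
    public void addCloseHook(Runnable hook) { closeHooks.add(hook); }
    public void close() { closeHooks.forEach(Runnable::run); }
  }

  // Populates the cache and registers eager eviction for when the core
  // closes, so entries never linger until GC pressure clears them.
  public static void loadExternalFloats(Core core) {
    CACHE.put(core.name, new float[] {1.0f, 2.0f});
    core.addCloseHook(() -> CACHE.remove(core.name));
  }

  public static void main(String[] args) {
    Core core = new Core("collection1");
    loadExternalFloats(core);
    System.out.println(CACHE.containsKey("collection1")); // true
    core.close();
    System.out.println(CACHE.containsKey("collection1")); // false
  }
}
```

The design point is only that eviction becomes deterministic (tied to core lifecycle) instead of depending on when the {{WeakHashMap}} notices its keys are unreachable.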






[jira] [Commented] (SOLR-10439) SchemaField.getNamedPropertyValues doesn't include 'large'

2017-04-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979228#comment-15979228
 ] 

Hoss Man commented on SOLR-10439:
-

NOTE: opened SOLR-10549 since fixing that is an independent bug from the test 
improvements suggested in SOLR-10518. 

> SchemaField.getNamedPropertyValues doesn't include 'large'
> --
>
> Key: SOLR-10439
> URL: https://issues.apache.org/jira/browse/SOLR-10439
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 6.5.1, 6.6, master (7.0)
>
> Attachments: SOLR_10439.patch
>
>
> I just noticed SchemaField.getNamedPropertyValues refers to most/all of 
> FieldProperties constants. This is a maintenance problem; it'd be nice if 
> there were a test to ensure properties don't get forgotten.  {{LARGE}} was 
> forgotten.






[jira] [Created] (SOLR-10549) /schema/fieldtypes doesn't include "large" if configured as a fieldType default

2017-04-21 Thread Hoss Man (JIRA)
Hoss Man created SOLR-10549:
---

 Summary: /schema/fieldtypes doesn't include "large" if configured 
as a fieldType default
 Key: SOLR-10549
 URL: https://issues.apache.org/jira/browse/SOLR-10549
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.5.1
Reporter: Hoss Man
Assignee: David Smiley


Spin-off of SOLR-10439: since the fix for {{/schema/fields}} is likely to be 
included in 6.5.1, the comparable fix for {{/schema/fieldtypes}} needs to 
be tracked in its own JIRA.






[no subject]

2017-04-21 Thread Mike Drob
+1 (non-binding)

Automated tests were happy for me -- SUCCESS! [1:16:35.519232]

I ran a Java API compatibility check between 6.5.0 and this 6.5.1, and while 
there are a few changes, I don't think any of them would actually end up being 
user-facing.
Posted results at http://people.apache.org/~mdrob/lucene-6.5.1/

Mike




[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1261 - Still Unstable!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1261/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.update.HdfsTransactionLog.(HdfsTransactionLog.java:132)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:159)  at 
org.apache.solr.update.UpdateHandler.(UpdateHandler.java:116)  at 
org.apache.solr.update.DirectUpdateHandler2.(DirectUpdateHandler2.java:107)
  at sun.reflect.GeneratedConstructorAccessor176.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:784)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:846)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1101)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:971)  at 
org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:854)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:958)  at 
org.apache.solr.core.CoreContainer.lambda$load$7(CoreContainer.java:593)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.HdfsTransactionLog
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
at 
org.apache.solr.update.HdfsTransactionLog.&lt;init&gt;(HdfsTransactionLog.java:132)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:203)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:159)
at org.apache.solr.update.UpdateHandler.&lt;init&gt;(UpdateHandler.java:116)
at 
org.apache.solr.update.DirectUpdateHandler2.&lt;init&gt;(DirectUpdateHandler2.java:107)
at sun.reflect.GeneratedConstructorAccessor176.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:784)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:846)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1101)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:971)
at org.apache.solr.core.SolrCore.&lt;init&gt;(SolrCore.java:854)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:958)
at 
org.apache.solr.core.CoreContainer.lambda$load$7(CoreContainer.java:593)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([72E3FC4BEF482461]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:302)
at sun.reflect.GeneratedMethodAccessor66.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 

[jira] [Commented] (SOLR-10480) Offset does not allow for full pagination in JSON Facet API

2017-04-21 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979095#comment-15979095
 ] 

Maxime Darçot commented on SOLR-10480:
--

This is definitely an issue with pagination and not with the numBuckets 
estimate. Indeed, for one faceted query with a limit of 100 I can go up to an 
offset of around 300 before the query stops returning anything (which would 
mean there are around 300 buckets), while with the same query but a limit of 
10,000 I can go up to an offset of around 30,000 (which means there are 
definitely a lot more than 300 buckets).

> Offset does not allow for full pagination in JSON Facet API
> ---
>
> Key: SOLR-10480
> URL: https://issues.apache.org/jira/browse/SOLR-10480
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, SolrCloud
>Affects Versions: 6.4.1
>Reporter: Maxime Darçot
>
> I have a SolrCloud cluster and when I use the JSON facet API to do term 
> faceting like this:
> bq. json.facet={"results":{"type": "terms", "field": "my_field", "limit": 
> 100, "offset": 100, "numBuckets": true}}
> it does work correctly. 
> However, the returned _numBuckets_ tells me that I have more than 6 million 
> buckets, but as soon as I start to increase the _offset_ value to browse these 
> buckets, it stops returning anything (when I reach an offset of around 300).
> What is even weirder is that if I use a bigger _limit_, like 10,000, I can 
> increase the _offset_ to around 29,000 before it stops returning anything.
> And the returned _numBuckets_ doesn't change all the while.
> It is a big problem because we can't paginate to the end of the buckets.
> Might be related to SOLR-7452, I don't know...



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)




[jira] [Created] (SOLR-10548) hyper-log-log based numBuckets for faceting

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10548:
---

 Summary: hyper-log-log based numBuckets for faceting
 Key: SOLR-10548
 URL: https://issues.apache.org/jira/browse/SOLR-10548
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


numBuckets currently uses an estimate (same as the unique function detailed at 
http://yonik.com/solr-count-distinct/ ).  We should either change 
implementations or introduce a way to optionally select a hyper-log-log based 
approach for a better estimate with high field cardinalities.
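For illustration, a toy sketch of the hyper-log-log idea (a register array indexed by a hash prefix, ranks from leading zeros, harmonic-mean estimate). The precision, constants, and hash are illustrative only and unrelated to Solr's actual implementation:

```java
public class Hll {
    // Toy HyperLogLog: 2^P registers; each keeps the max "rank" (position of
    // the first 1-bit) seen among the hashes routed to it.
    static final int P = 14;          // precision; illustrative choice
    static final int M = 1 << P;      // number of registers
    final byte[] registers = new byte[M];

    // splitmix64 finalizer: a cheap 64-bit mixer standing in for a real hash.
    static long mix(long z) {
        z += 0x9E3779B97F4A7C15L;
        z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L;
        z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL;
        return z ^ (z >>> 31);
    }

    void add(long hash) {
        int idx = (int) (hash >>> (64 - P));                 // top P bits pick the register
        int rank = Long.numberOfLeadingZeros(hash << P) + 1; // rank of the remaining bits
        if (rank > registers[idx]) registers[idx] = (byte) rank;
    }

    double estimate() {
        double sum = 0;
        for (byte r : registers) sum += Math.pow(2, -r);
        double alpha = 0.7213 / (1 + 1.079 / M);  // bias correction for large M
        return alpha * M * M / sum;               // raw estimate; no small-range correction
    }

    public static void main(String[] args) {
        Hll hll = new Hll();
        for (long i = 0; i < 100_000; i++) hll.add(mix(i));
        System.out.printf("estimated distinct: %.0f (true: 100000)%n", hll.estimate());
    }
}
```

With 2^14 registers the standard error is roughly 0.8%, at a fixed memory cost of 16 KB regardless of cardinality, which is what makes it attractive for high-cardinality fields.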






[jira] [Updated] (SOLR-9674) Facet Values Sort Order: Index should ignore casing

2017-04-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9674:
---
Component/s: (was: faceting)

> Facet Values Sort Order: Index should ignore casing
> ---
>
> Key: SOLR-9674
> URL: https://issues.apache.org/jira/browse/SOLR-9674
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Luke P Warwick
>  Labels: facet, features
>
> Index facet value sorting puts all values that start with a capital letter 
> before all values that start with a lower case letter.  
> Since users often don't know this pattern, nor that the value they are 
> looking for might start with a lower case letter, it leads to confusion and 
> bad experiences, in particular in E-commerce with the 'Brand' Facet.
> Desired behavior is that the order ignores case. If two values are otherwise 
> equal (Patagonia, patagonia), put the capital first.
> Example:
> Current Order: Anon,Marmot,Patagonia,Smith,Zoot,maloja,prAna
> Desired Order: Anon,maloja,Marmot,Patagonia,prAna,Smith,Zoot
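The requested ordering can be expressed as a two-level comparator: case-insensitive first, with natural (capital-first) order as the tie-breaker. A minimal sketch, independent of Solr's facet code, using the ticket's brand values:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class FacetSortOrderDemo {
    public static void main(String[] args) {
        // Brand values from the ticket's example.
        List<String> values = new ArrayList<>(Arrays.asList(
            "Anon", "Marmot", "Patagonia", "Smith", "Zoot", "maloja", "prAna"));
        // Case-insensitive primary sort; natural order breaks ties so a
        // capitalized form ("Patagonia") sorts before "patagonia".
        values.sort(String.CASE_INSENSITIVE_ORDER
            .thenComparing(Comparator.naturalOrder()));
        System.out.println(String.join(",", values));
    }
}
```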






[jira] [Updated] (SOLR-9674) Facet Values Sort Order: Index should ignore casing

2017-04-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9674?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-9674:
---
Issue Type: New Feature  (was: Bug)







[jira] [Commented] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException

2017-04-21 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979075#comment-15979075
 ] 

Hrishikesh Gadre commented on SOLR-9120:


I added the following comments to SOLR-8587 a long time ago.


I ran into a similar problem (NoSuchFileException for the segments file) while 
working on Solr backup/restore. This problem can be avoided by using the 
following trick implemented in ReplicationHandler.
https://github.com/apache/lucene-solr/blob/7794fbd13f1a0edfff8f121fb1c6a01075eeef6a/solr/core/src/java/org/apache/solr/handler/ReplicationHandler.java#L537-L542
I see that LukeRequestHandler does not implement a similar strategy:
https://github.com/apache/lucene-solr/blob/7794fbd13f1a0edfff8f121fb1c6a01075eeef6a/solr/core/src/java/org/apache/solr/handler/admin/LukeRequestHandler.java#L583
My guess is that if we change LukeRequestHandler to implement a strategy 
similar to ReplicationHandler's, it should fix the problem, although I don't 
fully understand why that is the case. Yonik Seeley, Uwe Schindler, do you have 
any idea?



OK, I confirmed that the above-mentioned approach fixes the NoSuchFileException 
for the segments file. Please refer to this quick patch:
https://github.com/hgadre/lucene-solr/commit/e780f63d4ce79b30f3379df3eb59021394080cc8
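For illustration, the ReplicationHandler-style guard being described amounts to tolerating a file that disappears between being listed and being stat'ed (e.g. a segments file deleted by a concurrent commit). A minimal java.nio sketch with a hypothetical helper name, not Solr's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SafeFileLength {
    // Return the file's size, or -1 if it cannot be read -- typically because
    // it vanished between directory listing and this call.
    static long safeSize(Path p) {
        try {
            return Files.size(p);
        } catch (IOException e) {  // covers NoSuchFileException
            return -1;
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("segments", ".tmp");
        System.out.println(safeSize(tmp));                              // actual size
        System.out.println(safeSize(Paths.get("missing_segments_2c5"))); // -1
        Files.delete(tmp);
    }
}
```

The caller then skips or reports the entry instead of letting the exception fail the whole request.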

One open question at this point,
If we fetch the IndexCommit via solrCore.getDeletionPoplicy().getLatestCommit() 
API, then should we be using the corresponding IndexReader instance (instead of 
IndexSearcher.getIndexReader()) ?. Otherwise we may get inconsistent results. 
But after reviewing the code, it looks like IndexCommit.getReader() method may 
return null value. So I am not sure if we can rely on it. Any thoughts on this?


Maybe we can just use this as the fix? [~elyograg] any thoughts?

> o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- 
> NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> 

[jira] [Created] (SOLR-10547) implement min/max facet functions for string fields

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10547:
---

 Summary: implement min/max facet functions for string fields
 Key: SOLR-10547
 URL: https://issues.apache.org/jira/browse/SOLR-10547
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


The min and max functions currently only work on numeric fields.  This JIRA is 
about adding support for String fields, with min/max defined as index order.
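For string fields, "index order" is the order of the indexed terms, i.e. unsigned UTF-8 byte order rather than Java's UTF-16 char order. A hedged client-side sketch of that comparison (values are illustrative; this is not the proposed Solr implementation):

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class StringMinMaxDemo {
    // Compare strings by unsigned UTF-8 byte order, matching term index order.
    static final Comparator<String> INDEX_ORDER = (a, b) -> {
        byte[] x = a.getBytes(StandardCharsets.UTF_8);
        byte[] y = b.getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < Math.min(x.length, y.length); i++) {
            int c = (x[i] & 0xff) - (y[i] & 0xff);  // unsigned byte comparison
            if (c != 0) return c;
        }
        return x.length - y.length;  // shorter prefix sorts first
    };

    public static void main(String[] args) {
        List<String> values = Arrays.asList("pear", "Apple", "orange");
        System.out.println(Collections.min(values, INDEX_ORDER));  // min by index order
        System.out.println(Collections.max(values, INDEX_ORDER));  // max by index order
    }
}
```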






[jira] [Commented] (SOLR-10480) Offset does not allow for full pagination in JSON Facet API

2017-04-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979060#comment-15979060
 ] 

Yonik Seeley commented on SOLR-10480:
-

I think this is probably caused by a bad estimate for numBuckets... meaning 
that if you keep paging, you should get all the values (we're just wrong about 
the number of values).







[jira] [Updated] (SOLR-10546) page facets by cursor/value

2017-04-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10546?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10546:

Component/s: Facet Module

> page facets by cursor/value
> ---
>
> Key: SOLR-10546
> URL: https://issues.apache.org/jira/browse/SOLR-10546
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>
> Paging facets can only currently be done by offset.  It can be more efficient 
> to allow paging by a cursor or by the value of the last bucket retrieved.
> This can also help issues such as SOLR-10480 which can lead to empty sets 
> returned because all buckets have been filtered by mincount.






[jira] [Created] (SOLR-10546) page facets by cursor/value

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10546:
---

 Summary: page facets by cursor/value
 Key: SOLR-10546
 URL: https://issues.apache.org/jira/browse/SOLR-10546
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


Paging facets can currently only be done by offset.  It can be more efficient 
to allow paging by a cursor or by the value of the last bucket retrieved.

This can also help with issues such as SOLR-10480, which can lead to empty sets 
being returned because all buckets have been filtered out by mincount.
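A hedged sketch of what value-based paging looks like: each page asks for terms strictly after the last bucket value seen, so no offset arithmetic (and no numBuckets estimate) is involved. The method name and in-memory term set are illustrative, not a proposed API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

public class FacetCursorDemo {
    // Return up to `limit` terms strictly after `after`; null means start
    // from the beginning. Terms are kept in sorted order, as in an index.
    static List<String> nextPage(NavigableSet<String> terms, String after, int limit) {
        List<String> page = new ArrayList<>();
        for (String t : (after == null ? terms : terms.tailSet(after, false))) {
            page.add(t);
            if (page.size() == limit) break;
        }
        return page;
    }

    public static void main(String[] args) {
        NavigableSet<String> terms = new TreeSet<>(Arrays.asList("a", "b", "c", "d", "e"));
        List<String> p1 = nextPage(terms, null, 2);                   // first page
        List<String> p2 = nextPage(terms, p1.get(p1.size() - 1), 2);  // resume after last value
        System.out.println(p1 + " " + p2);
    }
}
```

Because the cursor is the last value itself, a page can never "fall off the end" the way a large offset can when buckets are filtered out.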






[jira] [Created] (SOLR-10545) compound field faceting

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10545:
---

 Summary: compound field faceting
 Key: SOLR-10545
 URL: https://issues.apache.org/jira/browse/SOLR-10545
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


There should be a way to facet on tuples of field values from different fields.
We could extend the syntax of terms facet from field:"field1" to 
field:"field1,field2,field3"
The bucket values could then be an array:
{code}
buckets: [
  {
val : [val1, val2, val3],
count : 42
  },
 ...
]
{code}
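The tuple buckets above can be emulated client-side by counting over composite keys; a minimal sketch with hard-coded field values (the field names and data are illustrative, not Solr's implementation):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CompoundFacetDemo {
    public static void main(String[] args) {
        // Each row is (field1, field2) for one document; values are illustrative.
        String[][] docs = {
            {"red", "shirt"}, {"red", "shirt"}, {"blue", "shirt"}, {"red", "hat"}};
        // The bucket key is the tuple of field values, mirroring val: [val1, val2].
        Map<List<String>, Integer> buckets = new LinkedHashMap<>();
        for (String[] d : docs) buckets.merge(Arrays.asList(d), 1, Integer::sum);
        buckets.forEach((val, count) -> System.out.println(val + " -> " + count));
    }
}
```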






[jira] [Commented] (SOLR-9527) Solr RESTORE api doesn't distribute the replicas uniformly

2017-04-21 Thread Stephen Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979043#comment-15979043
 ] 

Stephen Weiss commented on SOLR-9527:
-

Any chance we can get this patch merged in?  We really need this issue fixed, 
and it has made upgrading a serious pain.  With every new version, we have to 
recompile our own Solr for this one little detail.  If it needs to be 
tested... well, we've used it in production for months and it works great.  
What's actually in master is no good; why didn't that need to be tested as 
much?

> Solr RESTORE api doesn't distribute the replicas uniformly 
> ---
>
> Key: SOLR-9527
> URL: https://issues.apache.org/jira/browse/SOLR-9527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.1
>Reporter: Hrishikesh Gadre
>Assignee: Varun Thacker
> Attachments: SOLR-9527.patch, SOLR-9527.patch, SOLR-9527.patch, 
> SOLR-9527.patch, Solr 9527.pdf
>
>
> Please refer to this email thread for details,
> http://lucene.markmail.org/message/ycun4x5nx7lwj5sk?q=solr+list:org%2Eapache%2Elucene%2Esolr-user+order:date-backward=1






[jira] [Commented] (SOLR-9120) o.a.s.h.a.LukeRequestHandler Error getting file length for [segments_NNN] -- NoSuchFileException

2017-04-21 Thread Stephen Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979029#comment-15979029
 ] 

Stephen Weiss commented on SOLR-9120:
-

I think this does have a real impact: we get these errors when doing backups, 
and they cause the backups to fail, so we have to repeat them.  Sometimes it 
takes a few hours before we get a clean backup as a result.


[jira] [Commented] (SOLR-10047) Mismatched Docvalue segments cause exception in Sorting/Facting; Uninvert per segment

2017-04-21 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979028#comment-15979028
 ] 

Hoss Man commented on SOLR-10047:
-

MergePolicy settings are normally randomized by the Solr test configs.

If a test depends on controlling exactly when a merge does or doesn't happen, 
you'll need to force the merge policy using something like 
{{NoMergePolicyFactory}}.

Example from TestInPlaceUpdatesDistrib...

{code}
// we need consistent segments that aren't re-ordered on merge because we're
// asserting inplace updates happen by checking the internal [docid]
systemSetPropertySolrTestsMergePolicyFactory(NoMergePolicyFactory.class.getName());

// HACK: Don't use a RandomMergePolicy, but only use the mergePolicyFactory that we've just set
System.setProperty(SYSTEM_PROPERTY_SOLR_TESTS_USEMERGEPOLICYFACTORY, "true");
System.setProperty(SYSTEM_PROPERTY_SOLR_TESTS_USEMERGEPOLICY, "false");
{code}

(the last two lines are only needed because we're phasing out MergePolicy for 
MergePolicyFactory)

> Mismatched Docvalue segments cause exception in Sorting/Facting; Uninvert per 
> segment
> -
>
> Key: SOLR-10047
> URL: https://issues.apache.org/jira/browse/SOLR-10047
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Keith Laban
>Assignee: Shalin Shekhar Mangar
> Fix For: 6.6, master (7.0)
>
>
> The configuration of UninvertingReader in SolrIndexSearcher creates a global 
> mapping across the directory of fields to uninvert. If docvalues are enabled on 
> a field, the creation of a new segment will cause queries to fail when 
> faceting/sorting on the recently docvalue-enabled field. This happens because 
> the UninvertingReader is configured globally across the entire directory, and 
> a single segment containing DVs for a field will incorrectly indicate that 
> all segments contain DVs.
> This patch addresses the incorrect behavior by determining the fields to be 
> uninverted on a per-segment basis.
> With the fix, it is still recommended that a reindex occur, as data loss will 
> happen when a DV and a non-DV segment are merged; SOLR-10046 addresses this 
> behavior. This fix is a stop-gap for the window between enabling 
> docvalues and completing a reindex. 






[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 827 - Still Unstable!

2017-04-21 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/827/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.schema.TestUseDocValuesAsStored.testMultipleSearchResults

Error Message:
mismatch: 'myid1'!='myid' @ response/docs/[0]/id

Stack Trace:
java.lang.RuntimeException: mismatch: 'myid1'!='myid' @ response/docs/[0]/id
at 
__randomizedtesting.SeedInfo.seed([E7F98B92BAB3A8DE:D5D38C0C424D8C07]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:983)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:930)
at 
org.apache.solr.schema.TestUseDocValuesAsStored.testMultipleSearchResults(TestUseDocValuesAsStored.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 13027 lines...]
   [junit4] Suite: org.apache.solr.schema.TestUseDocValuesAsStored
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (LUCENE-7796) Make reThrow idiom declare return type of Throwable so callers may use it in a way that compiler knows subsequent code is unreachable

2017-04-21 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7796?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-7796:
-
Description: 
A spinoff from LUCENE-7792: reThrow can be declared to return an unchecked 
exception so that callers can choose to use {{throw reThrow(...)}} as an idiom 
to let the compiler know any subsequent code will be unreachable.


  was:A spinoff from LUCENE-7792: reThrow can return an unchecked exception so 
that one can use {{throw reThrow(...)}} idiom to let the compiler know 
subsequent code would be unreachable.

Summary: Make reThrow idiom declare return type of Throwable so callers 
may use it in a way that compiler knows subsequent code is unreachable  (was: 
Make reThrow idiom return Throwable to enforce code unreachability)

clarified summary & description

> Make reThrow idiom declare return type of Throwable so callers may use it in 
> a way that compiler knows subsequent code is unreachable
> -
>
> Key: LUCENE-7796
> URL: https://issues.apache.org/jira/browse/LUCENE-7796
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 6.x, master (7.0)
>
>
> A spinoff from LUCENE-7792: reThrow can be declared to return an unchecked 
> exception so that callers can choose to use {{throw reThrow(...)}} as an 
> idiom to let the compiler know any subsequent code will be unreachable.
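The idiom under discussion can be sketched like this (a hedged illustration only; the class and method names are invented for the example and this is not Lucene's actual IOUtils code):

```java
// Hypothetical sketch of the rethrow idiom (names invented for illustration;
// not Lucene's actual API). The method never returns normally, but its
// declared return type lets callers write "throw Rethrow.rethrow(t);" so the
// compiler knows any subsequent code is unreachable.
final class Rethrow {
    private Rethrow() {}

    public static RuntimeException rethrow(Throwable t) {
        // The explicit <RuntimeException> type argument makes "throws T"
        // erase to an unchecked exception at this call site.
        return Rethrow.<RuntimeException>sneakyThrow(t);
    }

    @SuppressWarnings("unchecked")
    private static <T extends Throwable> RuntimeException sneakyThrow(Throwable t) throws T {
        throw (T) t; // erased cast: rethrows any Throwable, checked or not
    }
}
```

A caller can then write {{throw Rethrow.rethrow(cause);}} inside a method whose other branches return a value, and the compiler accepts it without a dummy return statement after the call.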



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10544) Implement term-df based short-circuiting when sorting streaming facets by count

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10544:
---

 Summary: Implement term-df based short-circuiting when sorting 
streaming facets by count
 Key: SOLR-10544
 URL: https://issues.apache.org/jira/browse/SOLR-10544
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


If streaming facets are sorted by count, we have the opportunity to use 
information in the index (namely term-df) to short-circuit terms that have no 
chance of making it into the current top-N in the priority queue.
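A hedged sketch of the idea (generic Java, not Solr's actual facet code): a term's document frequency is an upper bound on its facet count, so once the top-N heap is full, any term whose df cannot beat the heap's minimum count can be skipped before any counting work is done.

```java
import java.util.*;

// Illustrative only: "count" stands in for the expensive per-term counting
// step, and df (document frequency) is its cheap upper bound from the index.
final class TopNFacets {
    static List<String> topByCount(List<String> termsInIndexOrder,
                                   Map<String, Integer> df,
                                   Map<String, Integer> count,
                                   int n) {
        Comparator<Map.Entry<String, Integer>> byCount =
                Comparator.comparingInt(Map.Entry::getValue);
        PriorityQueue<Map.Entry<String, Integer>> heap = new PriorityQueue<>(byCount);
        for (String term : termsInIndexOrder) {
            // Short-circuit: df bounds the count from above, so this term
            // can never displace the heap's minimum once the heap is full.
            if (heap.size() == n && df.get(term) <= heap.peek().getValue()) {
                continue;
            }
            int c = count.get(term); // the expensive step we tried to avoid
            heap.offer(Map.entry(term, c));
            if (heap.size() > n) {
                heap.poll();
            }
        }
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) {
            result.add(heap.poll().getKey());
        }
        Collections.reverse(result); // highest count first
        return result;
    }
}
```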






[jira] [Created] (SOLR-10543) Implement sorting for streaming facets

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10543:
---

 Summary: Implement sorting for streaming facets
 Key: SOLR-10543
 URL: https://issues.apache.org/jira/browse/SOLR-10543
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


Using method=stream for JSON field faceting avoids building up any data 
structures in memory and calculates each bucket individually before writing it 
out to the response.  This comes at a cost of only being able to produce the 
buckets in index order.

We should add support for sorting by different criteria (count or facet 
function), do the minimum needed to calculate the bucket ordering, and then 
stream the rest (including subfacets).







[jira] [Closed] (SOLR-10286) Declare a field as "large", don't keep value in the document cache

2017-04-21 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-10286.
---
   Resolution: Fixed
Fix Version/s: (was: 6.6)
   6.5

Marking closed towards v6.5.  If there are bugs/tests that show a problem then 
new issue(s) need to be filed.

> Declare a field as "large", don't keep value in the document cache
> --
>
> Key: SOLR-10286
> URL: https://issues.apache.org/jira/browse/SOLR-10286
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: master (7.0), 6.5
>
> Attachments: SOLR-10286_large_fields.patch, 
> SOLR-10286_large_fields.patch, tests-failures.txt
>
>
> (part of umbrella issue SOLR-10117)
> This adds a field to be declared as "large" in the schema.  In the 
> {{SolrIndexSearcher.doc(...)}} handling, these fields are lazily fetched from 
> Lucene.  Unlike {{LazyDocument.LazyField}}, it's not cached after first-use 
> unless the value is "small" < 512KB by default.  "large" can only be used 
> when its stored="true" and multiValued="false" and the field is otherwise 
> compatible (basically not a numeric field) -- you'll get a helpful exception 
> if it's unsupported. BinaryField is not yet supported at this time; it could 
> be in the future.
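The caching rule described above can be illustrated with a small stand-in (a hypothetical class, not Solr's actual SolrIndexSearcher code): the value is fetched lazily on each access unless it turned out to be under the size threshold, in which case it is kept.

```java
import java.util.function.Supplier;

// Hypothetical stand-in for the behavior the issue describes: a "large"
// field's value is fetched lazily and only cached when it is "small".
final class LazyLargeField {
    static final int SMALL_LIMIT = 512 * 1024; // 512KB default per the issue

    private final Supplier<String> loader; // stands in for fetching from Lucene
    private String cached;

    LazyLargeField(Supplier<String> loader) {
        this.loader = loader;
    }

    String value() {
        if (cached != null) {
            return cached;
        }
        String v = loader.get();
        if (v.length() < SMALL_LIMIT) {
            cached = v; // small values are kept after first use
        }
        return v; // large values are re-fetched on every access
    }
}
```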






[jira] [Commented] (SOLR-10539) Ability to sort range facets

2017-04-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978929#comment-15978929
 ] 

Yonik Seeley commented on SOLR-10539:
-

We should probably add the ability to limit (and perhaps specify an offset) as 
well, just like field facets.

> Ability to sort range facets
> 
>
> Key: SOLR-10539
> URL: https://issues.apache.org/jira/browse/SOLR-10539
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>
> Just like term facets, one should be able to sort the buckets of range facets 
> by count or by any metric (facet-function) generated per-bucket.






[jira] [Resolved] (SOLR-7473) range facet should support mincount

2017-04-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-7473.

Resolution: Fixed

> range facet should support mincount
> ---
>
> Key: SOLR-7473
> URL: https://issues.apache.org/jira/browse/SOLR-7473
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Reporter: Yonik Seeley
>Priority: Minor
> Fix For: 5.2
>
> Attachments: SOLR-7473.patch
>
>
> Just like field faceting, range faceting should support the "mincount" 
> variable to limit buckets by minimum count.
> Example:
> {code}
> json.facet={
>   prices : {
> type : range,
> field : price,
> mincount : 1,
> start:0, end:100, gap:10
>   }
> }
> {code}






[jira] [Commented] (SOLR-10415) Within solr-core, debug/trace level logging should use parameterized log messages

2017-04-21 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978915#comment-15978915
 ] 

Mike Drob commented on SOLR-10415:
--

Great catch, Michael!

> Within solr-core, debug/trace level logging should use parameterized log 
> messages
> -
>
> Key: SOLR-10415
> URL: https://issues.apache.org/jira/browse/SOLR-10415
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Braun
>Priority: Trivial
>
> Noticed in several samplings of an active Solr that several debug statements 
> were taking measurable time because of the cost of the .toString() call, even 
> when the log.debug() statement produced no output because the effective log 
> level was INFO or higher. Using parameterized logging statements, i.e. 
> 'log.debug("Blah {}", o)', avoids incurring that cost.
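The difference can be demonstrated without any logging framework (a minimal stand-in logger, not SLF4J itself, though SLF4J's {{log.debug(String, Object)}} behaves the same way): with a parameterized message, the argument's toString() only runs when the level is enabled.

```java
// Minimal stand-in logger (not SLF4J) showing why parameterized messages are
// cheaper: the argument is never formatted when the level is disabled.
final class DemoLogger {
    boolean debugEnabled = false;

    void debug(String msg, Object arg) {
        if (!debugEnabled) {
            return; // cheap early exit: arg.toString() is never invoked
        }
        System.out.println(msg.replace("{}", String.valueOf(arg)));
    }
}

// An argument whose toString() is costly; here we just count invocations.
final class Expensive {
    static int toStringCalls = 0;

    @Override
    public String toString() {
        toStringCalls++;
        return "expensive";
    }
}
```

By contrast, {{log.debug("Blah " + o)}} pays for the concatenation, and therefore the toString(), before the level check can reject the message.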






[jira] [Commented] (SOLR-10528) Use docvalue for range faceting in JSON facet API

2017-04-21 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978912#comment-15978912
 ] 

Yonik Seeley commented on SOLR-10528:
-

[~liniku] [~rustamhsmv] While reviewing facet JIRAs, I ran across SOLR-9868, 
which appears to have the same goal (although I haven't reviewed the code).

> Use docvalue for range faceting in JSON facet API
> -
>
> Key: SOLR-10528
> URL: https://issues.apache.org/jira/browse/SOLR-10528
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Affects Versions: 6.5
>Reporter: Kensho Hirasawa
>Priority: Minor
> Attachments: SOLR-10528.patch
>
>
> Range faceting in JSON facet API has only one implementation. In the 
> implementation, all buckets are allocated and then range queries are executed 
> for all the buckets. Therefore, memory usage and computational cost of range 
> facet can be very high if range is wide and gap is narrow. 
> I think range faceting in JSON facet should have the implementation which 
> uses DocValues instead of inverted indices. By scanning DocValues, we can 
> execute range facets much more efficiently especially when the number of 
> buckets is large.
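A hedged sketch of the proposed approach (plain Java over an array of values, not the actual DocValues API): a single pass over per-document values with an arithmetic bucket lookup, rather than one range query per bucket, so the cost no longer grows with the number of buckets.

```java
// Illustration of the scan-based strategy: each document's value maps to its
// bucket in O(1), instead of executing a range query per bucket.
final class RangeFacetScan {
    static int[] rangeCounts(long[] perDocValues, long start, long end, long gap) {
        int numBuckets = (int) ((end - start + gap - 1) / gap);
        int[] counts = new int[numBuckets];
        for (long v : perDocValues) {
            if (v < start || v >= end) {
                continue; // outside the faceted range
            }
            counts[(int) ((v - start) / gap)]++;
        }
        return counts;
    }
}
```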






Re: [VOTE] Release Lucene/Solr 6.5.1 RC2

2017-04-21 Thread Steve Rowe
+1

Lucene docs, changes and javadocs look good, and the smoke tester is happy: 
SUCCESS! [0:30:36.220143]

--
Steve
www.lucidworks.com

> On Apr 21, 2017, at 9:45 AM, jim ferenczi  wrote:
> 
> Please vote for release candidate 2 for Lucene/Solr 6.5.1
> 
> The artifacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.5.1-RC2-revcd1f23c63abe03ae650c75ec8ccb37762806cc75
> 
> You can run the smoke tester directly with this command:
> 
> python3 -u dev-tools/scripts/smokeTestRelease.py \
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-6.5.1-RC2-revcd1f23c63abe03ae650c75ec8ccb37762806cc75
> 
> Sorry about the false start yesterday.





[jira] [Updated] (SOLR-10541) Range facet by function

2017-04-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10541:

Component/s: Facet Module

> Range facet by function
> ---
>
> Key: SOLR-10541
> URL: https://issues.apache.org/jira/browse/SOLR-10541
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>
> We should have the ability to facet-by-function-query in range facets.
> We can reuse existing function query syntax for the function.  Perhaps a 
> parameter like func:"sum(a,b)" instead of the normal field:"myfield" in the 
> range facet?






[jira] [Commented] (SOLR-10273) Re-order largest field values last in Lucene Document

2017-04-21 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978882#comment-15978882
 ] 

Michael Braun commented on SOLR-10273:
--

Please ignore my comment - wrong ticket!

> Re-order largest field values last in Lucene Document
> -
>
> Key: SOLR-10273
> URL: https://issues.apache.org/jira/browse/SOLR-10273
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.5
>
> Attachments: SOLR_10273_DocumentBuilder_move_longest_to_last.patch
>
>
> (part of umbrella issue SOLR-10117)
> In Solr's {{DocumentBuilder}}, at the very end, we should move the field 
> value(s) associated with the largest field (assuming "stored") to be last.  
> Lucene's default stored value codec can avoid reading and decompressing  the 
> last field value when it's not requested.  (As of LUCENE-6898).






[jira] [Updated] (SOLR-10542) facet by day or month

2017-04-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10542:

Component/s: Facet Module

> facet by day or month
> -
>
> Key: SOLR-10542
> URL: https://issues.apache.org/jira/browse/SOLR-10542
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>
> Given a normal date field, it would be nice to be able to facet by the day of 
> the week (i.e. 7 buckets), the month, or perhaps the day of the month?
> The interface could be perhaps a modification to field (or maybe range) 
> faceting by specifying a function.
> func : "dayofweek(mydatefield)"






[jira] [Created] (SOLR-10542) facet by day or month

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10542:
---

 Summary: facet by day or month
 Key: SOLR-10542
 URL: https://issues.apache.org/jira/browse/SOLR-10542
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


Given a normal date field, it would be nice to be able to facet by the day of 
the week (i.e. 7 buckets), the month, or perhaps the day of the month?

The interface could be perhaps a modification to field (or maybe range) 
faceting by specifying a function.
func : "dayofweek(mydatefield)"






[jira] [Created] (SOLR-10541) Range facet by function

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10541:
---

 Summary: Range facet by function
 Key: SOLR-10541
 URL: https://issues.apache.org/jira/browse/SOLR-10541
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


We should have the ability to facet-by-function-query in range facets.

We can reuse existing function query syntax for the function.  Perhaps a 
parameter like func:"sum(a,b)" instead of the normal field:"myfield" in the 
range facet?






[jira] [Created] (SOLR-10540) Custom ranges for range facets

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10540:
---

 Summary: Custom ranges for range facets
 Key: SOLR-10540
 URL: https://issues.apache.org/jira/browse/SOLR-10540
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


One should be able to specify custom ranges for range facets.
Those ranges should also be able to overlap.






[jira] [Commented] (SOLR-10273) Re-order largest field values last in Lucene Document

2017-04-21 Thread Michael Braun (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978856#comment-15978856
 ] 

Michael Braun commented on SOLR-10273:
--

Saw this was reopened; is this not fully implemented in the 6.5.1 RC?

> Re-order largest field values last in Lucene Document
> -
>
> Key: SOLR-10273
> URL: https://issues.apache.org/jira/browse/SOLR-10273
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.5
>
> Attachments: SOLR_10273_DocumentBuilder_move_longest_to_last.patch
>
>
> (part of umbrella issue SOLR-10117)
> In Solr's {{DocumentBuilder}}, at the very end, we should move the field 
> value(s) associated with the largest field (assuming "stored") to be last.  
> Lucene's default stored value codec can avoid reading and decompressing  the 
> last field value when it's not requested.  (As of LUCENE-6898).






[jira] [Commented] (LUCENE-7790) please make org.egothor.stemmer.Trie.add() public

2017-04-21 Thread Elie Roux (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978852#comment-15978852
 ] 

Elie Roux commented on LUCENE-7790:
---

It was changed by Apache in this commit: 
https://github.com/apache/lucene-solr/commit/04f3188bf9b8ce059f72399bad721d9190ed2dc0
so what I'm asking for is to revert that change.

> please make org.egothor.stemmer.Trie.add() public
> -
>
> Key: LUCENE-7790
> URL: https://issues.apache.org/jira/browse/LUCENE-7790
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 6.4.1
> Environment: Linux, Maven
>Reporter: Elie Roux
>  Labels: newbie
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Please make the method org.egothor.stemmer.Trie.add() public so that other 
> analysers can build trees.






[jira] [Commented] (LUCENE-7793) smokeTestRelease.py should run documentation-lint

2017-04-21 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978857#comment-15978857
 ] 

Steve Rowe commented on LUCENE-7793:


bq. I'm running this revised script now.

The smoker succeeded with the patch, running "ant validate" on the Lucene 
source dist and "ant validate documentation-lint" on the Solr source dist.

> smokeTestRelease.py should run documentation-lint
> -
>
> Key: LUCENE-7793
> URL: https://issues.apache.org/jira/browse/LUCENE-7793
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7793.patch, LUCENE-7793.patch
>
>
> {{smokeTestRelease.py}} runs {{ant validate}} on source releases, but this 
> doesn't include the {{documentation-lint}} target, which is part of {{ant 
> precommit}}.






[jira] [Created] (SOLR-10539) Ability to sort range facetgs

2017-04-21 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-10539:
---

 Summary: Ability to sort range facetgs
 Key: SOLR-10539
 URL: https://issues.apache.org/jira/browse/SOLR-10539
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Yonik Seeley


Just like term facets, one should be able to sort the buckets of range facets 
by count or by any metric (facet-function) generated per-bucket.






[jira] [Updated] (SOLR-10539) Ability to sort range facets

2017-04-21 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-10539:

Summary: Ability to sort range facets  (was: Ability to sort range facetgs)

> Ability to sort range facets
> 
>
> Key: SOLR-10539
> URL: https://issues.apache.org/jira/browse/SOLR-10539
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>
> Just like term facets, one should be able to sort the buckets of range facets 
> by count or by any metric (facet-function) generated per-bucket.






[jira] [Commented] (LUCENE-7789) replace & forbid "new FileInputStream" and "new FileOutputStream" with Files.newInputStream & Files.newOutputStream

2017-04-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978840#comment-15978840
 ] 

Adrien Grand commented on LUCENE-7789:
--

+1 We could still use the {{@SuppressForbidden}} annotation if there are rare 
cases that actually need a FileInputStream or a FileOutputStream.

> replace & forbid "new FileInputStream" and "new FileOutputStream" with 
> Files.newInputStream & Files.newOutputStream
> ---
>
> Key: LUCENE-7789
> URL: https://issues.apache.org/jira/browse/LUCENE-7789
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Hoss Man
>
> I haven't looked into the details of this much, but saw these links today and 
> thought it would be worth opening a jira for discussion...
> * 
> https://dzone.com/articles/fileinputstream-fileoutputstream-considered-harmful
> * https://issues.jenkins-ci.org/browse/JENKINS-42934
> * https://bugs.openjdk.java.net/browse/JDK-8080225
> The crux of the issue being that the "FileInputStream" and "FileOutputStream" 
> classes have finalizer methods with GC overhead that can be avoided using 
> Files.newInputStream and Files.newOutputStream in their place.
> This seems like it would make these methods good candidates for forbidding in 
> lucene/solr (if possible).
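The replacement the issue proposes looks like this in practice (standard {{java.nio.file}} API; {{Files.newInputStream}}/{{Files.newOutputStream}} return streams without the finalize() overhead discussed in the linked articles):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

final class NioStreams {
    // Instead of: new FileOutputStream(path.toFile())
    static void writeAll(Path path, String text) throws IOException {
        try (OutputStream out = Files.newOutputStream(path)) {
            out.write(text.getBytes(StandardCharsets.UTF_8));
        }
    }

    // Instead of: new FileInputStream(path.toFile())
    static String readAll(Path path) throws IOException {
        try (InputStream in = Files.newInputStream(path)) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```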






[jira] [Updated] (SOLR-10538) Phonetic Filter BM should respect the keyword attribute

2017-04-21 Thread Uwe Reh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Reh updated SOLR-10538:
---
Description: 
In some cases it is necessary to index unchanged keywords and phonetic mapped 
text in just one field. In my example it is needed, to add ids in the form 
"prefix+number"
The current implementation of the BeiderMorseFilter strips the digits and maps 
the remaining letters. As result, all ids with the same prefix become equal. :-(

To avoid this, the BeiderMorseFilter should respect the keyword attribute.
I prepared a patch for the implementation and the test class, which adds the 
evaluation of keywordAtt.isKeyword().

 

  was:
In some cases it is necessary to index unchanged keywords and phonetic mapped 
text in just one field. In my example it is needed, to add ids in the form 
"prefix+number"
The current implementation of the BeiderMorseFilter strips the digits and maps 
the remaining letters. As result all ids with the same prefix become equal. :-(

To avoid this, the BeiderMorseFilter should respect the keyword attribute.
I prepared a patch for the implementation and the test class, which adds the 
evaluation of keywordAtt.isKeyword().

 


> Phonetic Filter BM should respect the keyword attribute
> ---
>
> Key: SOLR-10538
> URL: https://issues.apache.org/jira/browse/SOLR-10538
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Uwe Reh
>Priority: Minor
>  Labels: keyword, patch
> Attachments: bmf.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> In some cases it is necessary to index unchanged keywords and phonetic mapped 
> text in just one field. In my example it is needed, to add ids in the form 
> "prefix+number"
> The current implementation of the BeiderMorseFilter strips the digits and 
> maps the remaining letters. As result, all ids with the same prefix become 
> equal. :-(
> To avoid this, the BeiderMorseFilter should respect the keyword attribute.
> I prepared a patch for the implementation and the test class, which adds the 
> evaluation of keywordAtt.isKeyword().
>  






[jira] [Updated] (SOLR-10538) Phonetic Filter BM should respect the keyword attribute

2017-04-21 Thread Uwe Reh (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Reh updated SOLR-10538:
---
Attachment: bmf.patch

> Phonetic Filter BM should respect the keyword attribute
> ---
>
> Key: SOLR-10538
> URL: https://issues.apache.org/jira/browse/SOLR-10538
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Uwe Reh
>Priority: Minor
>  Labels: keyword, patch
> Attachments: bmf.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> In some cases it is necessary to index unchanged keywords and phonetic mapped 
> text in just one field. In my example it is needed, to add ids in the form 
> "prefix+number"
> The current implementation of the BeiderMorseFilter strips the digits and 
> maps the remaining letters. As result all ids with the same prefix become 
> equal. :-(
> To avoid this, the BeiderMorseFilter should respect the keyword attribute.
> I prepared a patch for the implementation and the test class, which adds the 
> evaluation of keywordAtt.isKeyword().
>  






[JENKINS] Lucene-Solr-Tests-master - Build # 1777 - Still Unstable

2017-04-21 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1777/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestSolrJErrorHandling.testWithXml

Error Message:
expected:<530> but was:<520>

Stack Trace:
java.lang.AssertionError: expected:<530> but was:<520>
at 
__randomizedtesting.SeedInfo.seed([A8FF0DBD4C6F3361:31DB44BCE692A0C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.client.solrj.TestSolrJErrorHandling.doThreads(TestSolrJErrorHandling.java:185)
at 
org.apache.solr.client.solrj.TestSolrJErrorHandling.doIt(TestSolrJErrorHandling.java:200)
at 
org.apache.solr.client.solrj.TestSolrJErrorHandling.testWithXml(TestSolrJErrorHandling.java:110)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)

[jira] [Commented] (LUCENE-7790) please make org.egothor.stemmer.Trie.add() public

2017-04-21 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15978828#comment-15978828
 ] 

Adrien Grand commented on LUCENE-7790:
--

I think we should avoid applying these kinds of modifications to forked code. 
Maybe you could fork this code directly in your application to suit it to your 
needs?

> please make org.egothor.stemmer.Trie.add() public
> -
>
> Key: LUCENE-7790
> URL: https://issues.apache.org/jira/browse/LUCENE-7790
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 6.4.1
> Environment: Linux, Maven
>Reporter: Elie Roux
>  Labels: newbie
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> Please make the method org.egothor.stemmer.Trie.add() public so 
> that other analysers can build trees



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-10538) Phonetic Filter BM should respect the keyword attribute

2017-04-21 Thread Uwe Reh (JIRA)
Uwe Reh created SOLR-10538:
--

 Summary: Phonetic Filter BM should respect the keyword attribute
 Key: SOLR-10538
 URL: https://issues.apache.org/jira/browse/SOLR-10538
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Schema and Analysis
Reporter: Uwe Reh
Priority: Minor


In some cases it is necessary to index unchanged keywords and phonetically 
mapped text in a single field. In my case, I need to index ids of the form 
"prefix+number".
The current implementation of the BeiderMorseFilter strips the digits and maps 
the remaining letters; as a result, all ids sharing a prefix become equal. :-(

To avoid this, the BeiderMorseFilter should respect the keyword attribute.
I prepared a patch for the implementation and the test class, which adds an 
evaluation of keywordAtt.isKeyword().
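The intended semantics can be sketched outside of Lucene with a toy encoder 
(the real Beider-Morse algorithm is far more involved; toy_encode and 
phonetic_filter below are illustrative names, not Lucene APIs):

```python
import re

def toy_encode(term: str) -> str:
    # Stand-in for a phonetic encoder: drop digits and lower-case,
    # mimicking how Beider-Morse can collapse "prefix+number" ids.
    return re.sub(r"\d", "", term).lower()

def phonetic_filter(term: str, is_keyword: bool) -> str:
    # Proposed behavior: a token flagged via the keyword attribute
    # bypasses phonetic mapping and is indexed unchanged.
    return term if is_keyword else toy_encode(term)

# Without the keyword check, distinct ids collide:
assert phonetic_filter("REH42", False) == phonetic_filter("REH7", False)
# With is_keyword=True, they stay distinct:
assert phonetic_filter("REH42", True) != phonetic_filter("REH7", True)
```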

 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7793) smokeTestRelease.py should run documentation-lint

2017-04-21 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-7793:
---
Attachment: LUCENE-7793.patch

The first patch failed when I ran it against the latest 6.5.1 RC2 because the 
Lucene source distribution doesn't include {{dev-tools/}}, where the helper 
scripts for {{documentation-lint}} live:

{noformat}
-documentation-lint:
 [echo] checking for broken html...
[jtidy] Checking for broken html (such as invalid tags)...
   [delete] Deleting directory 
/tmp/smoke_lucene_6.5.1_cd1f23c63abe03ae650c75ec8ccb37762806cc75/unpack/lucene-6.5.1/build/jtidy_tmp
 [echo] Checking for broken links...
 [exec] python3: can't open file 
'/tmp/smoke_lucene_6.5.1_cd1f23c63abe03ae650c75ec8ccb37762806cc75/unpack/dev-tools/scripts/checkJavadocLinks.py':
 [Errno 2] No such file or directory
{noformat}

This version of the patch only attempts to run {{documentation-lint}} when the 
project is Solr, which has {{dev-tools/}}.

I'm running this revised script now.
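The guard described above can be sketched like this (the function name and the 
project-string check are hypothetical; the actual smokeTestRelease.py code 
differs):

```python
def should_run_documentation_lint(project: str) -> bool:
    # The helper scripts for documentation-lint (e.g. checkJavadocLinks.py)
    # live under dev-tools/, which the Lucene source release does not ship,
    # so only run the target when smoke-testing Solr.
    return project == 'solr'
```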

> smokeTestRelease.py should run documentation-lint
> -
>
> Key: LUCENE-7793
> URL: https://issues.apache.org/jira/browse/LUCENE-7793
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
> Attachments: LUCENE-7793.patch, LUCENE-7793.patch
>
>
> {{smokeTestRelease.py}} runs {{ant validate}} on source releases, but this 
> doesn't include the {{documentation-lint}} target, which is part of {{ant 
> precommit}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


