[jira] [Commented] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches

2020-05-13 Thread Tomas Eduardo Fernandez Lobbe (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106882#comment-17106882
 ] 

Tomas Eduardo Fernandez Lobbe commented on SOLR-14471:
--

Thanks Michael. PR looks good to me.

> base replica selection strategy not applied to "last place" shards.preference 
> matches
> -
>
> Key: SOLR-14471
> URL: https://issues.apache.org/jira/browse/SOLR-14471
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0), 8.3
>Reporter: Michael Gibney
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When {{shards.preferences}} is specified, all inherently equivalent groups of 
> replicas should fall back to being sorted by the {{replica.base}} strategy 
> (either random or some variant of "stable"). This currently works for every 
> group of "equivalent" replicas, with the exception of "last place" matches.
> This is easy to overlook, because usually it's the "first place" matches that 
> will be selected for the purpose of actually executing distributed requests; 
> but it's still a bug, and is especially problematic when the "last place" 
> matches are the same as the "first place" matches, e.g. when the specified 
> {{shards.preference}} matches _all_ available replicas.
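
The fix described above amounts to applying the base (random or stable) ordering inside *every* equivalence group, including the final one. A minimal hypothetical sketch (these are not Solr's actual classes; `TieredShuffle`, `tierOf`, and the `Random` base are illustrative assumptions) of what "base strategy applied to all tiers" looks like:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.TreeMap;

// Hypothetical sketch, not Solr's implementation: group replicas into
// preference tiers (lower tier = better shards.preference match), then
// apply the base ordering within EVERY tier -- including the last one.
public class TieredShuffle {
    public static List<String> sort(List<String> replicas,
                                    Map<String, Integer> tierOf,
                                    Random base) {
        // Bucket replicas by tier, keeping tiers in ascending order.
        TreeMap<Integer, List<String>> tiers = new TreeMap<>();
        for (String r : replicas) {
            tiers.computeIfAbsent(tierOf.getOrDefault(r, Integer.MAX_VALUE),
                                  k -> new ArrayList<>()).add(r);
        }
        List<String> out = new ArrayList<>();
        for (List<String> tier : tiers.values()) {
            // The bug was skipping this step for the last-place tier.
            Collections.shuffle(tier, base);
            out.addAll(tier);
        }
        return out;
    }
}
```

When the preference matches all replicas there is a single tier, so skipping the last tier would skip the base ordering entirely; the sketch above makes that failure mode impossible.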



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Assigned] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches

2020-05-13 Thread Tomas Eduardo Fernandez Lobbe (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomas Eduardo Fernandez Lobbe reassigned SOLR-14471:


Assignee: Tomas Eduardo Fernandez Lobbe

> base replica selection strategy not applied to "last place" shards.preference 
> matches
> -
>
> Key: SOLR-14471
> URL: https://issues.apache.org/jira/browse/SOLR-14471
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0), 8.3
>Reporter: Michael Gibney
>Assignee: Tomas Eduardo Fernandez Lobbe
>Priority: Minor
>
> When {{shards.preferences}} is specified, all inherently equivalent groups of 
> replicas should fall back to being sorted by the {{replica.base}} strategy 
> (either random or some variant of "stable"). This currently works for every 
> group of "equivalent" replicas, with the exception of "last place" matches.
> This is easy to overlook, because usually it's the "first place" matches that 
> will be selected for the purpose of actually executing distributed requests; 
> but it's still a bug, and is especially problematic when the "last place" 
> matches are the same as the "first place" matches, e.g. when the specified 
> {{shards.preference}} matches _all_ available replicas.






[GitHub] [lucene-solr] tflobbe commented on a change in pull request #1507: SOLR-14471: properly apply base replica ordering to last-place shards…

2020-05-13 Thread GitBox


tflobbe commented on a change in pull request #1507:
URL: https://github.com/apache/lucene-solr/pull/1507#discussion_r424864796



##
File path: 
solr/solrj/src/test/org/apache/solr/client/solrj/routing/RequestReplicaListTransformerGeneratorTest.java
##
@@ -88,6 +88,19 @@ public void replicaTypeAndReplicaBase() {
 )
 );
 
+// Add a PULL replica so that there's a tie for "last place"
+replicas.add(
+new Replica(
+"node4",

Review comment:
   node5?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [lucene-solr] mocobeta commented on pull request #1488: LUCENE-9321: Refactor renderJavadoc to allow relative links with multiple Gradle tasks

2020-05-13 Thread GitBox


mocobeta commented on pull request #1488:
URL: https://github.com/apache/lucene-solr/pull/1488#issuecomment-628341858


   Looks excellent to me too.
   






[jira] [Updated] (SOLR-14482) Fix compile-time warnings in solr/core/search/facet

2020-05-13 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-14482:
--
Summary: Fix compile-time warnings in solr/core/search/facet  (was: Fix 
auxilliary class warnings in solr/core/search/facet)

> Fix compile-time warnings in solr/core/search/facet
> ---
>
> Key: SOLR-14482
> URL: https://issues.apache.org/jira/browse/SOLR-14482
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> Taking this on next since I've just worked on it in SOLR-10810.






[jira] [Commented] (SOLR-14351) Harden MDCLoggingContext.clear depth tracking

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106763#comment-17106763
 ] 

ASF subversion and git services commented on SOLR-14351:


Commit f00b38d004831512de21ed1f8289b2e939dafbd3 in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f00b38d ]

SOLR-14351: commitScheduler was missing MDC logging (#1498)

(cherry picked from commit 4b9808a03d6c8ee1f1f71487372a689b6c5f9798)


> Harden MDCLoggingContext.clear depth tracking
> -
>
> Key: SOLR-14351
> URL: https://issues.apache.org/jira/browse/SOLR-14351
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.6
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> MDCLoggingContext tracks recursive calls and only clears when the recursion 
> level is back down to 0.  If a caller forgets to register and ends up calling 
> clear anyway, this can mess things up.  Additionally, I found at least one 
> place where this occurs, which led me to investigate this matter.
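
The depth tracking described above can be hardened so that an unbalanced clear() cannot drive the counter negative. A hypothetical sketch (class and method names here are illustrative, not Solr's MDCLoggingContext API):

```java
// Hypothetical sketch of hardened depth-tracked context clearing:
// clear() only wipes state when nesting returns to zero, and an
// unmatched clear() (caller never registered) is a safe no-op.
public class DepthContext {
    private int depth = 0;
    private String state = null;

    public void register(String s) {
        // Only the outermost register() sets the context.
        if (depth++ == 0) {
            state = s;
        }
    }

    public void clear() {
        if (depth == 0) {
            return; // unbalanced clear(): ignore instead of going negative
        }
        if (--depth == 0) {
            state = null; // back at the outermost level: actually clear
        }
    }

    public String current() {
        return state;
    }
}
```

The guard in clear() is the "hardening": without it, one forgotten register() would desynchronize the counter for every later caller.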






[jira] [Commented] (SOLR-14351) Harden MDCLoggingContext.clear depth tracking

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106761#comment-17106761
 ] 

ASF subversion and git services commented on SOLR-14351:


Commit 4b9808a03d6c8ee1f1f71487372a689b6c5f9798 in lucene-solr's branch 
refs/heads/master from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4b9808a ]

SOLR-14351: commitScheduler was missing MDC logging (#1498)



> Harden MDCLoggingContext.clear depth tracking
> -
>
> Key: SOLR-14351
> URL: https://issues.apache.org/jira/browse/SOLR-14351
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.6
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> MDCLoggingContext tracks recursive calls and only clears when the recursion 
> level is back down to 0.  If a caller forgets to register and ends up calling 
> clear anyway, this can mess things up.  Additionally, I found at least one 
> place where this occurs, which led me to investigate this matter.






[GitHub] [lucene-solr] dsmiley merged pull request #1498: SOLR-14351: commitScheduler was missing MDC logging

2020-05-13 Thread GitBox


dsmiley merged pull request #1498:
URL: https://github.com/apache/lucene-solr/pull/1498


   






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler on one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * bit: The "drill bit" is the Streaming Expression sent to export handler to 
be executed.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
bit=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 bit=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * bit: The "drill bit" is the Streaming Expression sent to export handler to 
be executed.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
bit=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 bit=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler on one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * bit: The "drill bit" is the Streaming Expression sent to export handler to 
> be executed.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> bit=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl="a,b,c", 
>  sort="a desc, b desc", 
>  bit=rollup(input(), over="a,b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  
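
The drill expression described above "maintains the sort order of the tuples and emits them": since each shard's /export handler returns an already-sorted, pre-aggregated stream, the wrapper only needs a k-way merge to produce one globally sorted stream for the outer rollup. A simplified hypothetical sketch of that merge step (integer keys stand in for sorted tuples; `SortedMerge` is an illustrative name, not a Solr class):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Hypothetical sketch: merge per-shard sorted streams into one sorted
// stream using a min-heap, as a drill-style wrapper must do before a
// wrapping expression (e.g. an outer rollup) finishes the aggregation.
public class SortedMerge {
    public static List<Integer> merge(List<List<Integer>> streams) {
        // Heap entry: {headValue, streamIndex, nextPosition}
        PriorityQueue<int[]> heap =
            new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[0]));
        for (int i = 0; i < streams.size(); i++) {
            if (!streams.get(i).isEmpty()) {
                heap.add(new int[] {streams.get(i).get(0), i, 1});
            }
        }
        List<Integer> out = new ArrayList<>();
        while (!heap.isEmpty()) {
            int[] e = heap.poll();
            out.add(e[0]); // emit smallest head, preserving global order
            List<Integer> s = streams.get(e[1]);
            if (e[2] < s.size()) {
                heap.add(new int[] {s.get(e[2]), e[1], e[2] + 1});
            }
        }
        return out;
    }
}
```

Because each input stream is already sorted (and partially aggregated by the /export handler), the merge is O(n log k) for k shards and never buffers a whole result set, which is what makes the push-down viable for high-cardinality fields.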




[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler on one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * bit: The "drill bit" is the Streaming Expression sent to the /export handler 
to be executed.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
bit=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 bit=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler on one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * bit: The "drill bit" is the Streaming Expression sent to export handler to 
be executed.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
bit=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 bit=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler on one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * bit: The "drill bit" is the Streaming Expression sent to the /export 
> handler to be executed.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> bit=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl="a,b,c", 
>  sort="a desc, b desc", 
>  bit=rollup(input(), over="a,b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  

[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * bit: The "drill bit" is the Streaming Expression sent to export handler to 
be executed.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
bit=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 bit=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
bit=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 bit=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * bit: The "drill bit" is the Streaming Expression sent to export handler to 
> be executed.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> bit=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl="a,b,c", 
>  sort="a desc, b desc", 
>  bit=rollup(input(), over="a,b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  





[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
bit=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 bit=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set before the tuples hit the 
network. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 expr=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> bit=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl="a,b,c", 
>  sort="a desc, b desc", 
>  bit=rollup(input(), over="a,b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  




[jira] [Created] (SOLR-14482) Fix auxilliary class warnings in solr/core/search/facet

2020-05-13 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14482:
-

 Summary: Fix auxilliary class warnings in solr/core/search/facet
 Key: SOLR-14482
 URL: https://issues.apache.org/jira/browse/SOLR-14482
 Project: Solr
  Issue Type: Sub-task
Reporter: Erick Erickson
Assignee: Erick Erickson


Taking this on next since I've just worked on it in SOLR-10810.






[jira] [Reopened] (SOLR-14426) forbidden api error during precommit DateMathFunction

2020-05-13 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reopened SOLR-14426:
---

I thought I'd reopen this to address David's comments and resolve the backport 
question.

> forbidden api error during precommit DateMathFunction
> -
>
> Key: SOLR-14426
> URL: https://issues.apache.org/jira/browse/SOLR-14426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When running `./gradlew precommit` I'll occasionally see
> {code}
> * What went wrong:
> Execution failed for task ':solr:contrib:analytics:forbiddenApisMain'.
> > de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API 
> > calls failed while scanning class 
> > 'org.apache.solr.analytics.function.mapping.DateMathFunction' 
> > (DateMathFunction.java): java.lang.ClassNotFoundException: 
> > org.apache.solr.analytics.function.mapping.DateMathValueFunction (while 
> > looking up details about referenced class 
> > 'org.apache.solr.analytics.function.mapping.DateMathValueFunction')
> {code}
> `./gradlew clean` fixes this, but I don't understand what happens or why. 
> Feels like a Gradle issue?






[jira] [Resolved] (SOLR-14475) Fix deprecation warnings resulting from upgrading commons cli to 1.4

2020-05-13 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-14475.
---
Fix Version/s: 8.6
   Resolution: Fixed

This was really almost all SolrCLI.java; if there are other deprecations, we'll 
get to them eventually.

> Fix deprecation warnings resulting from upgrading commons cli to 1.4
> 
>
> Key: SOLR-14475
> URL: https://issues.apache.org/jira/browse/SOLR-14475
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.6
>
>







[jira] [Commented] (SOLR-14475) Fix deprecation warnings resulting from upgrading commons cli to 1.4

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106732#comment-17106732
 ] 

ASF subversion and git services commented on SOLR-14475:


Commit 88f14e212356701220c8d2335b57409af548535e in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=88f14e2 ]

SOLR-14475: Fix deprecation warnings resulting from upgrading commons cli to 1.4

(cherry picked from commit 687dd42f5745589f10949bc4534c260a2e87b47c)


> Fix deprecation warnings resulting from upgrading commons cli to 1.4
> 
>
> Key: SOLR-14475
> URL: https://issues.apache.org/jira/browse/SOLR-14475
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>







[jira] [Commented] (SOLR-14475) Fix deprecation warnings resulting from upgrading commons cli to 1.4

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106728#comment-17106728
 ] 

ASF subversion and git services commented on SOLR-14475:


Commit 687dd42f5745589f10949bc4534c260a2e87b47c in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=687dd42 ]

SOLR-14475: Fix deprecation warnings resulting from upgrading commons cli to 1.4


> Fix deprecation warnings resulting from upgrading commons cli to 1.4
> 
>
> Key: SOLR-14475
> URL: https://issues.apache.org/jira/browse/SOLR-14475
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>







[jira] [Commented] (SOLR-13749) Implement support for joining across collections with multiple shards ( XCJF )

2020-05-13 Thread Dan Fox (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106711#comment-17106711
 ] 

Dan Fox commented on SOLR-13749:


[~dsmiley], [~gus] I opened a new PR #1514 that consolidates the 
cross-collection join query into the existing join query parser.  Let us know 
what you think.

> Implement support for joining across collections with multiple shards ( XCJF )
> --
>
> Key: SOLR-13749
> URL: https://issues.apache.org/jira/browse/SOLR-13749
> Project: Solr
>  Issue Type: New Feature
>Reporter: Kevin Watters
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 8.6
>
> Attachments: 2020-03 Smiley with ASF hat.jpeg
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This ticket includes two query parsers.
> The first one is the "Cross-Collection Join Filter" (XCJF) query parser. It 
> can call out to a remote collection to get a set of join keys to be used as 
> a filter against the local collection, performing an intersection based on 
> join keys between the two collections.
> The second one is the Hash Range query parser: given a field name and a hash 
> range, it returns only the documents whose values hash into that range.
> The local collection is the collection that you are searching against.
> The remote collection is the collection that contains the join keys that you 
> want to use as a filter.
> Each shard participating in the distributed request will execute a query 
> against the remote collection.  If the local collection is set up with the 
> compositeId router to be routed on the join key field, a hash range query is 
> applied to the remote collection query to only match the documents that 
> contain a potential match for the documents that are in the local shard/core. 
>  
>  
> Here's some vocab to help with the descriptions of the various parameters.
> ||Term||Description||
> |Local Collection|This is the main collection that is being queried.|
> |Remote Collection|This is the collection that the XCJFQuery will query to 
> resolve the join keys.|
> |XCJFQuery|The lucene query that executes a search to get back a set of join 
> keys from a remote collection|
> |HashRangeQuery|The lucene query that matches only the documents whose hash 
> code on a field falls within a specified range.|
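As an illustration only, a filter using the hash range parser might look like the following; the parameter names f (field), l (lower bound), and u (upper bound) and the field name join_key are assumptions for this sketch, not confirmed by this ticket:

```
fq={!hash_range f=join_key l=0 u=1073741823}
```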
>  
>  
> ||Param ||Required ||Description||
> |collection|Required|The name of the external Solr collection to be queried 
> to retrieve the set of join key values|
> |zkHost|Optional|The connection string to be used to connect to Zookeeper.  
> zkHost and solrUrl are both optional parameters, and at most one of them 
> should be specified.  
> If neither zkHost nor solrUrl is specified, the local Zookeeper cluster 
> will be used.|
> |solrUrl|Optional|The URL of the external Solr node to be queried|
> |from|Required|The join key field name in the external collection|
> |to|Required|The join key field name in the local collection|
> |v|See Note|The query to be executed against the external Solr collection to 
> retrieve the set of join key values.  
> Note:  The original query can be passed at the end of the string or as the 
> "v" parameter.  
> It's recommended to use query parameter substitution with the "v" parameter 
> to ensure no issues arise with the default query parsers.|
> |routed| |true / false.  If true, the XCJF query will use each shard's hash 
> range to determine the set of join keys to retrieve for that shard.
> This parameter improves the performance of the cross-collection join, but 
> it depends on the local collection being routed by the toField.  If this 
> parameter is not specified, 
> the XCJF query will try to determine the correct value automatically.|
> |ttl| |The length of time that an XCJF query in the cache will be considered 
> valid, in seconds.  Defaults to 3600 (one hour).  
> The XCJF query will not be aware of changes to the remote collection, so 
> if the remote collection is updated, cached XCJF queries may give inaccurate 
> results.  
> After the ttl period has expired, the XCJF query will re-execute the join 
> against the remote collection.|
> |_All others_| |Any normal Solr parameter can also be specified as a local 
> param.|
>  
> Example solrconfig.xml changes:
>  
> <cache name="hash_vin"
>        class="solr.LRUCache"
>        size="128"
>        initialSize="0"
>        regenerator="solr.NoOpRegenerator"/>
>  
> <queryParser name="xcjf" 
> 

[GitHub] [lucene-solr] danmfox opened a new pull request #1514: SOLR-13749: Change cross-collection join query syntax to {!join method=ccjoin ...}

2020-05-13 Thread GitBox


danmfox opened a new pull request #1514:
URL: https://github.com/apache/lucene-solr/pull/1514


   
   
   
   # Description
   
   Updates the cross-collection join query in #976 based on the feedback in 
SOLR-13749.  In that ticket there was a preference to consolidate the 
cross-collection join functionality into the existing join query parser, rather 
than creating a new separate query parser.
   
   # Solution
   
   This PR integrates the cross-collection join query parser into the existing 
join query parser plugin.  The syntax for a cross-collection join changes from 
`{!xcjf ...}` to `{!join method=ccjoin ...}`.  The arguments that could 
previously be set on the XCJF query parser plugin can now be set on the join 
query parser plugin.
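
   For illustration, an equivalent query before and after this change might look like the following (collection and field names are made up, not taken from the PR):
   
   ```
   # old syntax (separate XCJF parser):
   q={!xcjf collection=remoteItems from=item_id to=item_id}*:*
   
   # new syntax (consolidated into the join parser):
   q={!join method=ccjoin collection=remoteItems from=item_id to=item_id}*:*
   ```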
   
   # Tests
   
   The XCJFQueryTest class has been updated to use the new query syntax (and 
renamed to CrossCollectionJoinQueryTest).
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [x] I have run `ant precommit` and the appropriate test suite.
   - [x] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[GitHub] [lucene-solr] janhoy commented on pull request #1324: LUCENE-9033 Update ReleaseWizard for new website instructions

2020-05-13 Thread GitBox


janhoy commented on pull request #1324:
URL: https://github.com/apache/lucene-solr/pull/1324#issuecomment-628266819


   So I merged it @iverase 
   Feel free to review / test and file new issues for whatever you may find.









[jira] [Resolved] (LUCENE-9033) Update Release docs and scripts with new site instructions

2020-05-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/LUCENE-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved LUCENE-9033.
-
Fix Version/s: 8.5.2
   8.6
   master (9.0)
Lucene Fields:   (was: New)
   Resolution: Fixed

I merged the PR, even though I did not have a chance to test every aspect of it. 
We'll catch remaining quirks during the next release...

> Update Release docs and scripts with new site instructions
> -
>
> Key: LUCENE-9033
> URL: https://issues.apache.org/jira/browse/LUCENE-9033
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (9.0), 8.6, 8.5.2
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> *releaseWizard.py:* [PR#1324|https://github.com/apache/lucene-solr/pull/1324] 
> Janhoy has started on this, but will likely not finish before the 8.5 release
> *[ReleaseTODO|https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo] 
> page:* I suggest we deprecate this page if folks are happy with 
> releaseWizard, which should encapsulate all steps and details, and can also 
> generate an HTML TODO document per release.
> *publish-solr-ref-guide.sh:* 
> [PR#1326|https://github.com/apache/lucene-solr/pull/1326] This script can be 
> deleted, not in use since we do not publish PDF anymore
> *(/) solr-ref-guide/src/meta-docs/publish.adoc:*  Done
>  
> There may be other places affected, such as other WIKI pages?






[jira] [Commented] (LUCENE-9033) Update Release docs and scripts with new site instructions

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106693#comment-17106693
 ] 

ASF subversion and git services commented on LUCENE-9033:
-

Commit 81dc5c241948cd7680a30f47cd586a64dfd1071f in lucene-solr's branch 
refs/heads/branch_8_5 from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=81dc5c2 ]

LUCENE-9033 Update ReleaseWizard for new website instructions (#1324)

(cherry picked from commit 329e7c7bd5e20853ffca9815bfd916ffd6f4b448)


> Update Release docs and scripts with new site instructions
> -
>
> Key: LUCENE-9033
> URL: https://issues.apache.org/jira/browse/LUCENE-9033
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> *releaseWizard.py:* [PR#1324|https://github.com/apache/lucene-solr/pull/1324] 
> Janhoy has started on this, but will likely not finish before the 8.5 release
> *[ReleaseTODO|https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo] 
> page:* I suggest we deprecate this page if folks are happy with 
> releaseWizard, which should encapsulate all steps and details, and can also 
> generate an HTML TODO document per release.
> *publish-solr-ref-guide.sh:* 
> [PR#1326|https://github.com/apache/lucene-solr/pull/1326] This script can be 
> deleted, not in use since we do not publish PDF anymore
> *(/) solr-ref-guide/src/meta-docs/publish.adoc:*  Done
>  
> There may be other places affected, such as other WIKI pages?






[jira] [Commented] (LUCENE-9033) Update Release docs and scripts with new site instructions

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106692#comment-17106692
 ] 

ASF subversion and git services commented on LUCENE-9033:
-

Commit f4d46185a6fcc7559d5ff39f675186bfa933ce6a in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f4d4618 ]

LUCENE-9033 Update ReleaseWizard for new website instructions (#1324)

(cherry picked from commit 329e7c7bd5e20853ffca9815bfd916ffd6f4b448)


> Update Release docs and scripts with new site instructions
> -
>
> Key: LUCENE-9033
> URL: https://issues.apache.org/jira/browse/LUCENE-9033
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> *releaseWizard.py:* [PR#1324|https://github.com/apache/lucene-solr/pull/1324] 
> Janhoy has started on this, but will likely not finish before the 8.5 release
> *[ReleaseTODO|https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo] 
> page:* I suggest we deprecate this page if folks are happy with 
> releaseWizard, which should encapsulate all steps and details, and can also 
> generate an HTML TODO document per release.
> *publish-solr-ref-guide.sh:* 
> [PR#1326|https://github.com/apache/lucene-solr/pull/1326] This script can be 
> deleted, not in use since we do not publish PDF anymore
> *(/) solr-ref-guide/src/meta-docs/publish.adoc:*  Done
>  
> There may be other places affected, such as other WIKI pages?






[GitHub] [lucene-solr] janhoy merged pull request #1324: LUCENE-9033 Update ReleaseWizard for new website instructions

2020-05-13 Thread GitBox


janhoy merged pull request #1324:
URL: https://github.com/apache/lucene-solr/pull/1324


   









[jira] [Commented] (LUCENE-9033) Update Release docs and scripts with new site instructions

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106691#comment-17106691
 ] 

ASF subversion and git services commented on LUCENE-9033:
-

Commit 329e7c7bd5e20853ffca9815bfd916ffd6f4b448 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=329e7c7 ]

LUCENE-9033 Update ReleaseWizard for new website instructions (#1324)



> Update Release docs and scripts with new site instructions
> -
>
> Key: LUCENE-9033
> URL: https://issues.apache.org/jira/browse/LUCENE-9033
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *releaseWizard.py:* [PR#1324|https://github.com/apache/lucene-solr/pull/1324] 
> Janhoy has started on this, but will likely not finish before the 8.5 release
> *[ReleaseTODO|https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo] 
> page:* I suggest we deprecate this page if folks are happy with 
> releaseWizard, which should encapsulate all steps and details, and can also 
> generate an HTML TODO document per release.
> *publish-solr-ref-guide.sh:* 
> [PR#1326|https://github.com/apache/lucene-solr/pull/1326] This script can be 
> deleted, not in use since we do not publish PDF anymore
> *(/) solr-ref-guide/src/meta-docs/publish.adoc:*  Done
>  
> There may be other places affected, such as other WIKI pages?






[jira] [Commented] (SOLR-10814) Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos authentication

2020-05-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-10814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106690#comment-17106690
 ] 

Jan Høydahl commented on SOLR-10814:


[~mdrob], SOLR-12131 is now merged.

> Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos 
> authentication
> ---
>
> Key: SOLR-10814
> URL: https://issues.apache.org/jira/browse/SOLR-10814
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.2
>Reporter: Hrishikesh Gadre
>Priority: Major
> Attachments: SOLR-10814.patch
>
>
> Solr allows configuring roles to control user access to the system. This is 
> accomplished through rule-based permission definitions which are assigned to 
> users.
> The authorization framework in Solr passes the information about the request 
> (to be authorized) using an instance of AuthorizationContext class. Currently 
> the only way to extract authenticated user is via getUserPrincipal() method 
> which returns an instance of java.security.Principal class. The 
> RuleBasedAuthorizationPlugin implementation invokes getName() method on the 
> Principal instance to fetch the list of associated roles.
> https://github.com/apache/lucene-solr/blob/2271e73e763b17f971731f6f69d6ffe46c40b944/solr/core/src/java/org/apache/solr/security/RuleBasedAuthorizationPlugin.java#L156
> In the case of the basic authentication mechanism, the principal is the 
> userName, so it works fine. But in the case of Kerberos authentication, the 
> user principal also contains the REALM information: e.g. instead of foo, it 
> would return f...@example.com. This means that if the user changes the 
> authentication mechanism, he would also need to change the user-role mapping 
> in the authorization section to use f...@example.com instead of foo. This is 
> not good from a usability perspective.
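One way to make the user-role mapping mechanism-agnostic is to normalize the principal name by stripping the Kerberos realm before the role lookup. The sketch below is illustrative only, not code from the attached patch; the class and method names are made up:

```java
import java.util.Map;
import java.util.Set;

public class PrincipalNames {
    // Strip an optional Kerberos realm ("foo@EXAMPLE.COM" -> "foo") so the
    // same user-role mapping works for basic auth and Kerberos principals.
    static String shortName(String principalName) {
        int at = principalName.indexOf('@');
        return at < 0 ? principalName : principalName.substring(0, at);
    }

    public static void main(String[] args) {
        // Hypothetical user-role mapping, as it might appear in security.json
        Map<String, Set<String>> roles = Map.of("foo", Set.of("admin"));
        System.out.println(roles.get(shortName("foo@EXAMPLE.COM"))); // prints [admin]
    }
}
```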






[jira] [Resolved] (SOLR-12131) Authorization plugin support for getting user's roles from the outside

2020-05-13 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12131.

Fix Version/s: 8.6
   Resolution: Fixed  (was: Later)

> Authorization plugin support for getting user's roles from the outside
> --
>
> Key: SOLR-12131
> URL: https://issues.apache.org/jira/browse/SOLR-12131
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently the {{RuleBasedAuthorizationPlugin}} relies on explicitly mapping 
> users to roles. However, when users are authenticated by an external Identity 
> service (e.g. JWT as implemented in SOLR-12121), that external service keeps 
> track of the user's roles, and will pass that as a "claim" in the token (JWT).
> In order for Solr to be able to Authorise requests based on those roles, the 
> Authorization plugin should be able to accept (verified) roles from the 
> request instead of explicit mapping.
> Suggested approach is to create a new interface {{VerifiedUserRoles}} and a 
> {{PrincipalWithUserRoles}} which implements the interface. The Authorization 
> plugin can then pull the roles from request. By piggy-backing on the 
> Principal, we have a seamless way to transfer extra external information, and 
> there is also a natural relationship:
> {code:java}
> User Authentication -> Role validation -> Creating a Principal{code}
> I plan to add the interface, the custom Principal class and restructure 
> {{RuleBasedAuthorizationPlugin}} into an abstract base class and two 
> implementations: {{RuleBasedAuthorizationPlugin}} (as today) and a new 
> {{ExternalRoleRuleBasedAuthorizationPlugin}}.
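A minimal sketch of how such an interface pair could look; this is illustrative only, the demo class and helper method are assumptions, and the real classes live in Solr's org.apache.solr.security package:

```java
import java.security.Principal;
import java.util.Set;

// A Principal can advertise roles that were verified by an external
// identity provider (e.g. claims from a JWT).
interface VerifiedUserRoles {
    Set<String> getVerifiedRoles();
}

// Carries both the user name and the externally verified roles.
class PrincipalWithUserRoles implements Principal, VerifiedUserRoles {
    private final String name;
    private final Set<String> roles;

    PrincipalWithUserRoles(String name, Set<String> roles) {
        this.name = name;
        this.roles = roles;
    }

    @Override public String getName() { return name; }
    @Override public Set<String> getVerifiedRoles() { return roles; }
}

public class RolesDemo {
    // An authorization plugin can pull roles from the request's Principal
    // instead of consulting an explicit user-to-role mapping.
    static Set<String> rolesFor(Principal p) {
        return p instanceof VerifiedUserRoles
                ? ((VerifiedUserRoles) p).getVerifiedRoles()
                : Set.of();
    }

    public static void main(String[] args) {
        Principal p = new PrincipalWithUserRoles("jan", Set.of("admin"));
        System.out.println(rolesFor(p)); // prints [admin]
    }
}
```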






[jira] [Commented] (SOLR-12131) Authorization plugin support for getting user's roles from the outside

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106686#comment-17106686
 ] 

ASF subversion and git services commented on SOLR-12131:


Commit d8877cf7af73ee6a82cba952935ff6bc07aef65f in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d8877cf ]

SOLR-12131: ExternalRoleRuleBasedAuthorizationPlugin (#341)

(cherry picked from commit 1e449e3d048ad9dca3de4630920a9c46d57eb83f)


> Authorization plugin support for getting user's roles from the outside
> --
>
> Key: SOLR-12131
> URL: https://issues.apache.org/jira/browse/SOLR-12131
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> Currently the {{RuleBasedAuthorizationPlugin}} relies on explicitly mapping 
> users to roles. However, when users are authenticated by an external Identity 
> service (e.g. JWT as implemented in SOLR-12121), that external service keeps 
> track of the user's roles, and will pass that as a "claim" in the token (JWT).
> In order for Solr to be able to Authorise requests based on those roles, the 
> Authorization plugin should be able to accept (verified) roles from the 
> request instead of explicit mapping.
> Suggested approach is to create a new interface {{VerifiedUserRoles}} and a 
> {{PrincipalWithUserRoles}} which implements the interface. The Authorization 
> plugin can then pull the roles from request. By piggy-backing on the 
> Principal, we have a seamless way to transfer extra external information, and 
> there is also a natural relationship:
> {code:java}
> User Authentication -> Role validation -> Creating a Principal{code}
> I plan to add the interface, the custom Principal class and restructure 
> {{RuleBasedAuthorizationPlugin}} into an abstract base class and two 
> implementations: {{RuleBasedAuthorizationPlugin}} (as today) and a new 
> {{ExternalRoleRuleBasedAuthorizationPlugin}}.






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set before the tuples hit the 
network. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation, other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 expr=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation, other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 expr=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set before the tuples hit the 
> network. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> expr=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation, other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl="a,b,c", 
>  sort="a desc, b desc", 
>  expr=rollup(input(), over="a,b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  




[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation, other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 expr=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation, other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 expr=rollup(input(), over="a, b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> expr=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr=rollup(input(), over="a,b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl="a,b,c", 
 sort="a desc, b desc", 
 expr=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr=rollup(input(), over="a,b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> expr=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl="a,b,c", 
>  sort="a desc, b desc", 
>  expr=rollup(input(), over="a,b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  




[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr=rollup(input(), over="a, b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
exp=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr=rollup(input(), over="a, b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> expr=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr=rollup(input(), over="a, b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  




[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
exp=rollup(input(), over="a,b", sum(c))) {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr=rollup(input(), over="a, b", sum(c))),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr="rollup(input(), over="a,b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> exp=rollup(input(), over="a,b", sum(c))) {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr=rollup(input(), over="a, b", sum(c))),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  




[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr="rollup(input(), over="a,b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality by 
pushing down the first level of aggregation into the /export handler.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr="rollup(input(), over="a,b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> expr="rollup(input(), over="a,b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality by 
> pushing down the first level of aggregation into the /export handler.
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr="rollup(input(), over="a,b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> expr="rollup(input(), over="a,b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality.
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl="a,b,c", sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality.
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
of each shard in a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality.
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy merged pull request #341: SOLR-12131: ExternalRoleRuleBasedAuthorizationPlugin

2020-05-13 Thread GitBox


janhoy merged pull request #341:
URL: https://github.com/apache/lucene-solr/pull/341


   






[jira] [Commented] (SOLR-12131) Authorization plugin support for getting user's roles from the outside

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106681#comment-17106681
 ] 

ASF subversion and git services commented on SOLR-12131:


Commit 1e449e3d048ad9dca3de4630920a9c46d57eb83f in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1e449e3 ]

SOLR-12131: ExternalRoleRuleBasedAuthorizationPlugin (#341)



> Authorization plugin support for getting user's roles from the outside
> --
>
> Key: SOLR-12131
> URL: https://issues.apache.org/jira/browse/SOLR-12131
> Project: Solr
>  Issue Type: New Feature
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Currently the {{RuleBasedAuthorizationPlugin}} relies on explicitly mapping 
> users to roles. However, when users are authenticated by an external Identity 
> service (e.g. JWT as implemented in SOLR-12121), that external service keeps 
> track of the user's roles, and will pass that as a "claim" in the token (JWT).
> In order for Solr to be able to Authorise requests based on those roles, the 
> Authorization plugin should be able to accept (verified) roles from the 
> request instead of explicit mapping.
> Suggested approach is to create a new interface {{VerifiedUserRoles}} and a 
> {{PrincipalWithUserRoles}} which implements the interface. The Authorization 
> plugin can then pull the roles from request. By piggy-backing on the 
> Principal, we have a seamless way to transfer extra external information, and 
> there is also a natural relationship:
> {code:java}
> User Authentication -> Role validation -> Creating a Principal{code}
> I plan to add the interface, the custom Principal class and restructure 
> {{RuleBasedAuthorizationPlugin}} in an abstract base class and two 
> implementations: {{RuleBasedAuthorizationPlugin}} (as today) and a new 
> {{ExternalRoleRuleBasedAuthorizationPlugin.}}
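The role-carrying-Principal design described above can be illustrated with a small sketch. This is Python pseudocode for the proposed Java design, not Solr's actual API; `PrincipalWithUserRoles` is named in the issue, but the method and parameter names here are assumptions:

```python
# Sketch (assumption, not Solr's API): a Principal that carries roles verified
# by an external identity provider, so the authorization layer can read roles
# from the request itself instead of a static user-to-role mapping.
class PrincipalWithUserRoles:
    def __init__(self, name, verified_roles):
        self.name = name
        # Roles come from a verified token claim, e.g. a JWT "roles" claim.
        self.verified_roles = frozenset(verified_roles)

    def get_verified_roles(self):
        return self.verified_roles

def is_authorized(principal, required_roles):
    # Authorization succeeds if the principal holds any of the required roles.
    return bool(principal.get_verified_roles() & set(required_roles))

p = PrincipalWithUserRoles("jan", ["admin", "dev"])
```

Piggy-backing the roles on the Principal keeps the authentication-to-authorization hand-off in one object, mirroring the flow `User Authentication -> Role validation -> Creating a Principal`.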






[jira] [Assigned] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-14481:
-

Assignee: Joel Bernstein

> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality.
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality.
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 This provides fast aggregation over fields with infinite cardinality.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 

This provides fast aggregation over fields with infinite cardinality.

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  This provides fast aggregation over fields with infinite cardinality.
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 

This provide fast aggregation over fields with infinite cardinality.

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  
> This provides fast aggregation over fields with infinite cardinality.
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 

 

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, 
 q="*:*", 
 fl=a,b,c, sort="a desc, b desc", 
 expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 In order to finish the aggregation other expressions can be used:
{code:java}
rollup(
select(
   drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))"),
   a,
   b,
   sum(c) as sums),
over="a, b",
sum(sums))
   
 {code}
 

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  In order to finish the aggregation other expressions can be used:
> {code:java}
> rollup(
> select(
>drill(collection1, 
>  q="*:*", 
>  fl=a,b,c, sort="a desc, b desc", 
>  expr="rollup(input(), over="a, b", sum(c))"),
>a,
>b,
>sum(c) as sums),
> over="a, b",
> sum(sums))
>
>  {code}
>  
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 

 

 

 

  was:
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
 

 

 


> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in one 
> replica in each shard of a collection and pass four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: Streaming Expressions.
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  
>  
>  
>  






[jira] [Updated] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14481:
--
Description: 
This ticket will add the *drill* Streaming Expression. The drill Streaming 
Expression is a wrapper around the functionality that is described in 
SOLR-14470. The idea is for drill to contact the /export handler in one replica 
in each shard of a collection and pass four parameters:
 * q: query
 * fl: field list
 * sort: sort spec
 * expr: Streaming Expressions.

The export handler will pass the result set through the streaming expression 
performing an aggregation on the sorted result set and return the aggregated 
tuples. The drill expression will simply maintain the sort order of the tuples 
and emit them so that a wrapper expression can perform operations on the sorted 
aggregate tuples.

Sample syntax:
{code:java}
drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
expr="rollup(input(), over="a, b", sum(c))") {code}
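Since drill emits per-shard partial aggregates in sort order, a wrapping expression is expected to finish the aggregation over the merged stream. A hypothetical sketch of that pattern (the outer rollup and the sum(sum(c)) naming are illustrative assumptions, not taken from this ticket):
{code:java}
rollup(drill(collection1,
             q="*:*",
             fl="a,b,c",
             sort="a desc, b desc",
             expr="rollup(input(), over=\"a,b\", sum(c))"),
       over="a,b",
       sum(sum(c)))
{code}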

> Add drill Streaming Expression
> --
>
> Key: SOLR-14481
> URL: https://issues.apache.org/jira/browse/SOLR-14481
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> This ticket will add the *drill* Streaming Expression. The drill Streaming 
> Expression is a wrapper around the functionality that is described in 
> SOLR-14470. The idea is for drill to contact the /export handler in a replica 
> of each shard in a collection, passing four parameters:
>  * q: query
>  * fl: field list
>  * sort: sort spec
>  * expr: a Streaming Expression
> The export handler will pass the result set through the streaming expression 
> performing an aggregation on the sorted result set and return the aggregated 
> tuples. The drill expression will simply maintain the sort order of the 
> tuples and emit them so that a wrapper expression can perform operations on 
> the sorted aggregate tuples.
> Sample syntax:
> {code:java}
> drill(collection1, q="*:*", fl=a,b,c, sort="a desc, b desc", 
> expr="rollup(input(), over="a, b", sum(c))") {code}
>  
>  
>  






[jira] [Created] (SOLR-14481) Add drill Streaming Expression

2020-05-13 Thread Joel Bernstein (Jira)
Joel Bernstein created SOLR-14481:
-

 Summary: Add drill Streaming Expression
 Key: SOLR-14481
 URL: https://issues.apache.org/jira/browse/SOLR-14481
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Reporter: Joel Bernstein









[GitHub] [lucene-solr] madrob commented on pull request #341: SOLR-12131: ExternalRoleRuleBasedAuthorizationPlugin

2020-05-13 Thread GitBox


madrob commented on pull request #341:
URL: https://github.com/apache/lucene-solr/pull/341#issuecomment-628226952


   @janhoy Do you want to push this? I'm starting to work on SOLR-10814 and 
would like to be able to build on top of the great work you've already done 
here!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Resolved] (SOLR-14423) static caches in StreamHandler ought to move to CoreContainer lifecycle

2020-05-13 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-14423.
-
Fix Version/s: 8.6
   Resolution: Fixed

The final fix initializes a {{CoreContainer.solrClientCache}} instance and uses 
it in most other places.

One odd duck was the {{CalciteSolrDriver}}, which is configured statically by 
the JDBC framework. Here the CoreContainer explicitly finds that singleton 
instance and sets it to use the common SolrClientCache.

Additionally, other static members of {{StreamHandler}} are now kept in an 
{{ObjectCache}} instance, which is again initialized in 
{{CoreContainer.objectCache}}.

These changes should improve the performance of the /sql handler and increase 
the isolation between CoreContainers.

Thank you David, Christine, and especially Joel for insights into the streaming 
framework's inner workings and for additional testing.

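The motivation can be sketched generically: a static (class-level) cache is one JVM-wide map shared by every container running in that JVM, while a container-owned cache follows its container's lifecycle. A minimal Python analogy (names invented for illustration; this is not the Solr code):

```python
class StreamHandlerStatic:
    """Anti-pattern: a class-level cache is one JVM-wide map, shared by
    every handler instance -- including handlers of other containers."""
    solr_client_cache = {}


class CoreContainer:
    """Container-scoped: each container owns (and can close) its own cache."""
    def __init__(self):
        self.solr_client_cache = {}

    def close(self):
        # Clearing affects only this container, not its siblings.
        self.solr_client_cache.clear()


a, b = CoreContainer(), CoreContainer()
a.solr_client_cache["zk:2181"] = "client-A"
print("zk:2181" in b.solr_client_cache)   # False: containers are isolated

StreamHandlerStatic.solr_client_cache["zk:2181"] = "client-X"
print("zk:2181" in StreamHandlerStatic.solr_client_cache)  # True JVM-wide
```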
> static caches in StreamHandler ought to move to CoreContainer lifecycle
> ---
>
> Key: SOLR-14423
> URL: https://issues.apache.org/jira/browse/SOLR-14423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> StreamHandler (at "/stream") has several statically declared caches.  I think 
> this is problematic, such as in testing wherein multiple nodes could be in 
> the same JVM.  One of them is more serious -- SolrClientCache which is 
> closed/cleared via a SolrCore close hook.  That's bad for performance but 
> also dangerous since another core might want to use one of these clients!
> CC [~jbernste]






[jira] [Commented] (SOLR-14423) static caches in StreamHandler ought to move to CoreContainer lifecycle

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106629#comment-17106629
 ] 

ASF subversion and git services commented on SOLR-14423:


Commit 3abd7585689db17ced6084cc360a87769d73d481 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3abd758 ]

SOLR-14423: Move static SolrClientCache from StreamHandler to CoreContainer for 
wider reuse and better life-cycle management.


> static caches in StreamHandler ought to move to CoreContainer lifecycle
> ---
>
> Key: SOLR-14423
> URL: https://issues.apache.org/jira/browse/SOLR-14423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Assignee: Andrzej Bialecki
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> StreamHandler (at "/stream") has several statically declared caches.  I think 
> this is problematic, such as in testing wherein multiple nodes could be in 
> the same JVM.  One of them is more serious -- SolrClientCache which is 
> closed/cleared via a SolrCore close hook.  That's bad for performance but 
> also dangerous since another core might want to use one of these clients!
> CC [~jbernste]






[jira] [Commented] (SOLR-14423) static caches in StreamHandler ought to move to CoreContainer lifecycle

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106630#comment-17106630
 ] 

ASF subversion and git services commented on SOLR-14423:


Commit d1aaa5ed34e5e6224ce57150cb143dd249bfef90 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d1aaa5e ]

SOLR-14423: Additional fixes for object caching and incorrect test assumptions.


> static caches in StreamHandler ought to move to CoreContainer lifecycle
> ---
>
> Key: SOLR-14423
> URL: https://issues.apache.org/jira/browse/SOLR-14423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Assignee: Andrzej Bialecki
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> StreamHandler (at "/stream") has several statically declared caches.  I think 
> this is problematic, such as in testing wherein multiple nodes could be in 
> the same JVM.  One of them is more serious -- SolrClientCache which is 
> closed/cleared via a SolrCore close hook.  That's bad for performance but 
> also dangerous since another core might want to use one of these clients!
> CC [~jbernste]






[jira] [Resolved] (SOLR-14456) Compressed requests fail in SolrCloud when the request is routed internally by the serving solr node

2020-05-13 Thread Houston Putman (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman resolved SOLR-14456.
---
Fix Version/s: 8.6
   master (9.0)
   Resolution: Fixed

> Compressed requests fail in SolrCloud when the request is routed internally 
> by the serving solr node
> 
>
> Key: SOLR-14456
> URL: https://issues.apache.org/jira/browse/SOLR-14456
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.7.2
> Environment: Solr version: 7.7.2
> Solr cloud enabled
> Cluster topology: 6 nodes, 1 single collection, 10 shards and 3 replicas. 1 
> HTTP LB using Round Robin over all nodes
> All cluster nodes have gzip enabled for all paths, all HTTP verbs and all 
> MIME types.
> Solr client: HttpSolrClient targeting the HTTP LB
> h3.  
>Reporter: Samuel García Martínez
>Assignee: Houston Putman
>Priority: Major
> Fix For: master (9.0), 8.6
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> h3. Solr cluster setup
>  * Solr version: 7.7.2
>  * Solr cloud enabled
>  * Cluster topology: 6 nodes, 1 single collection, 10 shards and 3 replicas. 
> 1 HTTP LB using Round Robin over all nodes
>  * All cluster nodes have gzip enabled for all paths, all HTTP verbs and all 
> MIME types.
>  * Solr client: HttpSolrClient targeting the HTTP LB
> h3. Problem description
> When the Solr node that receives the request has to forward it
> to a Solr Node that can actually perform the query, the response headers are 
> added incorrectly to the response, causing any HTTP client to fail, whether 
> it's a SolrClient or a basic HTTP client implementation with any other SDK.
> To simplify the case, let's try to start from the following repro scenario:
>  * Start one node with cloud mode and port 8983
>  * Create one single collection (1 shard, 1 replica)
>  * Start another node with port 8984 and the previously started zk (-z 
> localhost:9983)
>  * Start a java application and query the cluster using the node on port 8984 
> (the one that doesn't host the collection)
> So, then something like this happens:
>  * The application queries node:8984 with compression enabled 
> ("Accept-Encoding: gzip")
> and wt=javabin
>  * Node:8984 can't perform the query and creates an HTTP request behind the 
> scenes to node:8983
>  * Node:8983 returns a gzipped response with "Content-Encoding: gzip" and 
> "Content-Type:
> application/octet-stream"
>  * Node:8984 adds the "Content-Encoding: gzip" header as a character encoding 
> to the response (it should be forwarded as a "Content-Encoding" header, not as 
> a character encoding)
>  * HttpSolrClient receives a "Content-Type: 
> application/octet-stream;charset=gzip", causing
> an exception.
>  * HttpSolrClient tries to quietly close the connection, but since the stream 
> is broken,
> the Utils.consumeFully fails to actually consume the entity (it throws 
> another exception in
> GzipDecompressingEntity#getContent() with "not in GZIP format")
> The exception thrown by HttpSolrClient is:
> {code:java}
> java.nio.charset.UnsupportedCharsetException: gzip
>  at java.nio.charset.Charset.forName(Charset.java:531)
>  at org.apache.http.entity.ContentType.create(ContentType.java:271)
>  at org.apache.http.entity.ContentType.create(ContentType.java:261)
>  at org.apache.http.entity.ContentType.parse(ContentType.java:319)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:591)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1015)
>  at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1031)
>  at 
> org.apache.solr.client.solrj.SolrClient$$FastClassBySpringCGLIB$$7fcf36a0.invoke()
>  at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218){code}
>  
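The failure mode can be illustrated outside Solr: a content type of "application/octet-stream;charset=gzip" is structurally well-formed, but "gzip" names a compression scheme, not a character set, so any charset lookup fails. A standalone Python sketch of the same parse-then-lookup sequence (not the Solr/HttpClient code):

```python
import codecs
from email.message import Message


def parse_content_type(header_value):
    """Parse a Content-Type header into (mime type, charset parameter)."""
    msg = Message()
    msg['Content-Type'] = header_value
    return msg.get_content_type(), msg.get_param('charset')


# The broken header produced when "Content-Encoding: gzip" is misapplied
# as a character encoding by the forwarding node:
mime, charset = parse_content_type('application/octet-stream;charset=gzip')
print(mime, charset)  # application/octet-stream gzip

# The charset lookup fails -- the Python analogue of Java's
# UnsupportedCharsetException from Charset.forName("gzip"):
try:
    codecs.lookup(charset)
except LookupError as e:
    print('not a charset:', e)
```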






[jira] [Commented] (SOLR-14456) Compressed requests fail in SolrCloud when the request is routed internally by the serving solr node

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106613#comment-17106613
 ] 

ASF subversion and git services commented on SOLR-14456:


Commit 7b32e68d054adb511999a97da8bcc2ad5a8a2428 in lucene-solr's branch 
refs/heads/branch_8x from Samuel García Martínez
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7b32e68 ]

SOLR-14456: Fix Content-Type header forwarding on compressed requests (#1480)

Co-authored-by: Samuel García Martínez 


> Compressed requests fail in SolrCloud when the request is routed internally 
> by the serving solr node
> 
>
> Key: SOLR-14456
> URL: https://issues.apache.org/jira/browse/SOLR-14456
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.7.2
> Environment: Solr version: 7.7.2
> Solr cloud enabled
> Cluster topology: 6 nodes, 1 single collection, 10 shards and 3 replicas. 1 
> HTTP LB using Round Robin over all nodes
> All cluster nodes have gzip enabled for all paths, all HTTP verbs and all 
> MIME types.
> Solr client: HttpSolrClient targeting the HTTP LB
> h3.  
>Reporter: Samuel García Martínez
>Assignee: Houston Putman
>Priority: Major
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> h3. Solr cluster setup
>  * Solr version: 7.7.2
>  * Solr cloud enabled
>  * Cluster topology: 6 nodes, 1 single collection, 10 shards and 3 replicas. 
> 1 HTTP LB using Round Robin over all nodes
>  * All cluster nodes have gzip enabled for all paths, all HTTP verbs and all 
> MIME types.
>  * Solr client: HttpSolrClient targeting the HTTP LB
> h3. Problem description
> When the Solr node that receives the request has to forward it
> to a Solr Node that can actually perform the query, the response headers are 
> added incorrectly to the response, causing any HTTP client to fail, whether 
> it's a SolrClient or a basic HTTP client implementation with any other SDK.
> To simplify the case, let's try to start from the following repro scenario:
>  * Start one node with cloud mode and port 8983
>  * Create one single collection (1 shard, 1 replica)
>  * Start another node with port 8984 and the previously started zk (-z 
> localhost:9983)
>  * Start a java application and query the cluster using the node on port 8984 
> (the one that doesn't host the collection)
> So, then something like this happens:
>  * The application queries node:8984 with compression enabled 
> ("Accept-Encoding: gzip")
> and wt=javabin
>  * Node:8984 can't perform the query and creates an HTTP request behind the 
> scenes to node:8983
>  * Node:8983 returns a gzipped response with "Content-Encoding: gzip" and 
> "Content-Type:
> application/octet-stream"
>  * Node:8984 adds the "Content-Encoding: gzip" header as a character encoding 
> to the response (it should be forwarded as a "Content-Encoding" header, not as 
> a character encoding)
>  * HttpSolrClient receives a "Content-Type: 
> application/octet-stream;charset=gzip", causing
> an exception.
>  * HttpSolrClient tries to quietly close the connection, but since the stream 
> is broken,
> the Utils.consumeFully fails to actually consume the entity (it throws 
> another exception in
> GzipDecompressingEntity#getContent() with "not in GZIP format")
> The exception thrown by HttpSolrClient is:
> {code:java}
> java.nio.charset.UnsupportedCharsetException: gzip
>  at java.nio.charset.Charset.forName(Charset.java:531)
>  at org.apache.http.entity.ContentType.create(ContentType.java:271)
>  at org.apache.http.entity.ContentType.create(ContentType.java:261)
>  at org.apache.http.entity.ContentType.parse(ContentType.java:319)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:591)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1015)
>  at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:1031)
>  at 
> org.apache.solr.client.solrj.SolrClient$$FastClassBySpringCGLIB$$7fcf36a0.invoke()
>  at 
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218){code}
>  






[jira] [Commented] (SOLR-14426) forbidden api error during precommit DateMathFunction

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106605#comment-17106605
 ] 

Erick Erickson commented on SOLR-14426:
---

NP; besides, backporting is a waste until there's agreement on the version in 
master...

> forbidden api error during precommit DateMathFunction
> -
>
> Key: SOLR-14426
> URL: https://issues.apache.org/jira/browse/SOLR-14426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When running `./gradlew precommit` I'll occasionally see
> {code}
> * What went wrong:
> Execution failed for task ':solr:contrib:analytics:forbiddenApisMain'.
> > de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API 
> > calls failed while scanning class 
> > 'org.apache.solr.analytics.function.mapping.DateMathFunction' 
> > (DateMathFunction.java): java.lang.ClassNotFoundException: 
> > org.apache.solr.analytics.function.mapping.DateMathValueFunction (while 
> > looking up details about referenced class 
> > 'org.apache.solr.analytics.function.mapping.DateMathValueFunction')
> {code}
> `./gradlew clean` fixes this, but I don't understand what or why this 
> happens. Feels like a gradle issue?






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106597#comment-17106597
 ] 

Jan Høydahl commented on SOLR-14105:


{quote}[~sbordet] : [~janhoy] as I said, the issues are fixed in Jetty 9.4.25+
{quote}
[~ttaranov], are you able to test with 8.6-SNAPSHOT? Can be downloaded from 
Jenkins: 
https://builds.apache.org/view/L/view/Lucene/job/Solr-Artifacts-8.x/lastSuccessfulBuild/artifact/solr/package/

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Commented] (SOLR-14471) base replica selection strategy not applied to "last place" shards.preference matches

2020-05-13 Thread Michael Gibney (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106581#comment-17106581
 ] 

Michael Gibney commented on SOLR-14471:
---

A couple of reports of this bug in the wild have recently surfaced on the 
solr-user list; basically, for some replica configurations and 
shards.preference settings, routing (e.g., load balancing) of internal requests 
is not working properly. [~tflobbe], would you be able to take a look, when you 
have a chance?

> base replica selection strategy not applied to "last place" shards.preference 
> matches
> -
>
> Key: SOLR-14471
> URL: https://issues.apache.org/jira/browse/SOLR-14471
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0), 8.3
>Reporter: Michael Gibney
>Priority: Minor
>
> When {{shards.preferences}} is specified, all inherently equivalent groups of 
> replicas should fall back to being sorted by the {{replica.base}} strategy 
> (either random or some variant of "stable"). This currently works for every 
> group of "equivalent" replicas, with the exception of "last place" matches.
> This is easy to overlook, because usually it's the "first place" matches that 
> will be selected for the purpose of actually executing distributed requests; 
> but it's still a bug, and is especially problematic when "last place matches" 
> == "first place matches" – e.g. when {{shards.preference}} specified matches 
> _all_ available replicas.
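The intended behavior can be sketched generically: order replicas by preference rank, then apply the base strategy within every equally-ranked group, including the last one (the bug reported here is that the final group skipped that fallback). A minimal Python sketch under assumed names, not Solr's implementation:

```python
from itertools import groupby


def sort_replicas(replicas, preference_rank, base_key):
    """Order replicas by preference rank; within each equally-preferred
    group -- including the last -- fall back to the base strategy."""
    ranked = sorted(replicas, key=preference_rank)
    result = []
    for _, group in groupby(ranked, key=preference_rank):
        group = sorted(group, key=base_key)  # base fallback for EVERY group
        result.extend(group)
    return result


# Example: rank 0 = preferred match; the "base" strategy here is a simple
# stable lexical ordering standing in for random/stable replica.base.
replicas = ['r3', 'r1', 'r2', 'r4']
rank = {'r1': 0, 'r2': 0, 'r3': 1, 'r4': 1}
print(sort_replicas(replicas, rank.__getitem__, base_key=str))
# ['r1', 'r2', 'r3', 'r4']
```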






[jira] [Commented] (SOLR-13132) Improve JSON "terms" facet performance when sorted by relatedness

2020-05-13 Thread Michael Gibney (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106560#comment-17106560
 ] 

Michael Gibney commented on SOLR-13132:
---

I just pushed several commits (each with "SOLR-14467" in the commit message). 
I think the commits are fairly digestible; although they illustrate (and I 
believe fix) the problem, they are likely not appropriate for use as-is, so I 
tried to mark them with nocommit messages accordingly.

I did the work on this branch rather than at SOLR-14467 because the testing 
built out for this issue (SOLR-13132) was helpful, and it was more generally 
helpful to compare consistency across sweep/non-sweep implementations to inform 
how _best_ to go about addressing the issues raised by {{allBuckets}} in even a 
strictly non-sweep context. That said, I tried to separate things out to make 
it clear what part of the fix would likely be applicable to the current master 
branch (I think the relevant commit to "backport" to master would be 
22446b126de3a6d66c8a9270e1d583d89b07865c).

I think that the use of {{RelatednessAgg}} in {{allBuckets}} may be 
fundamentally incompatible with deferred ({{otherAccs}}) collection. The 
approach I took to address this is to prevent {{RelatednessAgg}} from being 
deferred when {{allBuckets=true}}. Another possibility, not entirely thought 
through, would be to somehow make {{RelatednessAgg}} aware of when it's being 
used in a deferred (otherAccs) context, and cumulatively track allBuckets data 
in a way that is not reset by calls to SKGSlotAcc.reset(). I kind of don't see 
how that would work though, and I think my confusion at this point centers on 
how any single {{otherAcc}} with {{numSlots==1}} can ever cumulatively track 
any stats for allBuckets. I'm probably missing something here, but in any event 
I'm hoping that these commits will prove to be a good starting point for 
discussion!

> Improve JSON "terms" facet performance when sorted by relatedness 
> --
>
> Key: SOLR-13132
> URL: https://issues.apache.org/jira/browse/SOLR-13132
> Project: Solr
>  Issue Type: Improvement
>  Components: Facet Module
>Affects Versions: 7.4, master (9.0)
>Reporter: Michael Gibney
>Priority: Major
> Attachments: SOLR-13132-with-cache-01.patch, 
> SOLR-13132-with-cache.patch, SOLR-13132.patch, SOLR-13132_testSweep.patch
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> When sorting buckets by {{relatedness}}, JSON "terms" facet must calculate 
> {{relatedness}} for every term. 
> The current implementation uses a standard uninverted approach (either 
> {{docValues}} or {{UnInvertedField}}) to get facet counts over the domain 
> base docSet, and then uses that initial pass as a pre-filter for a 
> second-pass, inverted approach of fetching docSets for each relevant term 
> (i.e., {{count > minCount}}?) and calculating intersection size of those sets 
> with the domain base docSet.
> Over high-cardinality fields, the overhead of per-term docSet creation and 
> set intersection operations increases request latency to the point where 
> relatedness sort may not be usable in practice (for my use case, even after 
> applying the patch for SOLR-13108, for a field with ~220k unique terms per 
> core, QTime for high-cardinality domain docSets were, e.g.: cardinality 
> 1816684=9000ms, cardinality 5032902=18000ms).
> The attached patch brings the above example QTimes down to a manageable 
> ~300ms and ~250ms respectively. The approach calculates uninverted facet 
> counts over domain base, foreground, and background docSets in parallel in a 
> single pass. This allows us to take advantage of the efficiencies built into 
> the standard uninverted {{FacetFieldProcessorByArray[DV|UIF]}}), and avoids 
> the per-term docSet creation and set intersection overhead.
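The single-pass idea in the description can be sketched abstractly: while sweeping each term's postings once, accumulate counts against the domain base, foreground, and background doc sets simultaneously, instead of building a docSet per term and intersecting afterwards. A toy Python sketch (data structures invented for illustration; not the patch's code):

```python
def sweep_counts(postings, base, fore, back):
    """One pass over each term's postings, counting hits in the base,
    foreground, and background doc sets in parallel -- no per-term
    docSet construction or set-intersection step."""
    counts = {}
    for term, docs in postings.items():
        n_base = n_fore = n_back = 0
        for doc in docs:
            if doc in base:
                n_base += 1
            if doc in fore:
                n_fore += 1
            if doc in back:
                n_back += 1
        counts[term] = (n_base, n_fore, n_back)
    return counts


postings = {'x': [1, 2, 3], 'y': [2, 4]}
print(sweep_counts(postings, base={1, 2, 3, 4}, fore={1, 2}, back={1, 2, 3, 4}))
# {'x': (3, 2, 3), 'y': (2, 1, 2)}
```

The per-term triples are exactly the inputs a relatedness score needs, which is why computing them in one sweep avoids the latency described above.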






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-13 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106547#comment-17106547
 ] 

Mike Drob commented on LUCENE-9365:
---

bq. Maybe that's the bug and it should only do it when the prefix length is 
strictly greater than the term length?
I think it's supposed to be an optimization, so maybe it's safe to drop the 
SingleTermEnum entirely and the whole problem goes away by itself?

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.
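The report is easy to verify with a plain edit-distance check: "bba" really is one edit (a single appended character) away from "bbab", so with maxEdits=1 the term should match. A standalone Python sketch, unrelated to Lucene's automaton implementation:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]


# "bba" -> "bbab" needs one insertion, so it is within edit distance 1
# even when the prefix length (3) equals the length of the search term.
print(levenshtein('bba', 'bbab'))  # 1
```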






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-13 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106544#comment-17106544
 ] 

Mike Drob commented on LUCENE-9365:
---

Do we correctly handle the case for maxEdits=2 and prefix=length-1? That seems 
like it would be the worst combination of short terms for fuzzy query and long 
prefixes which we see here.

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Commented] (SOLR-14426) forbidden api error during precommit DateMathFunction

2020-05-13 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106536#comment-17106536
 ] 

Mike Drob commented on SOLR-14426:
--

I didn't backport yet because there wasn't consensus that this was the right 
approach. I'll defer to you on what we should do, Erick; you're thinking about 
this much more than I am at the moment, and I trust that your head is in the 
right space.

> forbidden api error during precommit DateMathFunction
> -
>
> Key: SOLR-14426
> URL: https://issues.apache.org/jira/browse/SOLR-14426
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> When running `./gradlew precommit` I'll occasionally see
> {code}
> * What went wrong:
> Execution failed for task ':solr:contrib:analytics:forbiddenApisMain'.
> > de.thetaphi.forbiddenapis.ForbiddenApiException: Check for forbidden API 
> > calls failed while scanning class 
> > 'org.apache.solr.analytics.function.mapping.DateMathFunction' 
> > (DateMathFunction.java): java.lang.ClassNotFoundException: 
> > org.apache.solr.analytics.function.mapping.DateMathValueFunction (while 
> > looking up details about referenced class 
> > 'org.apache.solr.analytics.function.mapping.DateMathValueFunction')
> {code}
> `./gradlew clean` fixes this, but I don't understand what or why this 
> happens. Feels like a gradle issue?






[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14478:
--
Attachment: SOLR-14478.patch

> Allow the diff Stream Evaluator to operate on the rows of a matrix
> --
>
> Key: SOLR-14478
> URL: https://issues.apache.org/jira/browse/SOLR-14478
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-14478.patch
>
>
> Currently the *diff* function performs *serial differencing* on a numeric 
> vector. This ticket will allow the diff function to perform serial 
> differencing on all the rows of a *matrix*. This will make it easy to perform 
> *correlations* on a matrix of *differenced time series vectors* using math 
> expressions.
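For reference, row-wise serial differencing is the following operation (a plain-Python sketch of the math, not Solr's streaming-expression syntax; the sample matrix is made up):

```python
def diff_rows(matrix):
    """Lag-1 serial differencing applied to each row of a matrix:
    a row of length n becomes a row of length n-1 of successive deltas."""
    return [[row[i + 1] - row[i] for i in range(len(row) - 1)] for row in matrix]

m = [[1, 3, 6, 10],   # one "time series" per row
     [2, 4, 8, 16]]
print(diff_rows(m))  # [[2, 3, 4], [2, 4, 8]]
```

The differenced rows can then be fed to a correlation function, which is the use case the ticket describes.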






[jira] [Commented] (SOLR-14423) static caches in StreamHandler ought to move to CoreContainer lifecycle

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106511#comment-17106511
 ] 

ASF subversion and git services commented on SOLR-14423:


Commit dd4fa8f2f87d1dc7a10d72febc9241520b6294d6 in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=dd4fa8f ]

SOLR-14423: Additional fixes for object caching and incorrect test assumptions.


> static caches in StreamHandler ought to move to CoreContainer lifecycle
> ---
>
> Key: SOLR-14423
> URL: https://issues.apache.org/jira/browse/SOLR-14423
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: David Smiley
>Assignee: Andrzej Bialecki
>Priority: Major
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> StreamHandler (at "/stream") has several statically declared caches.  I think 
> this is problematic, such as in testing wherein multiple nodes could be in 
> the same JVM.  One of them is more serious -- SolrClientCache which is 
> closed/cleared via a SolrCore close hook.  That's bad for performance but 
> also dangerous since another core might want to use one of these clients!
> CC [~jbernste]






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106463#comment-17106463
 ] 

David Smiley commented on SOLR-11934:
-

I don't agree with that change; it's redundant with what MDC is already 
providing and is there for.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent of the logging implementation used? The SLF4J and log4j docs seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106447#comment-17106447
 ] 

Erick Erickson commented on SOLR-11934:
---

Done.







[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106445#comment-17106445
 ] 

ASF subversion and git services commented on SOLR-11934:


Commit d992d0a059e1a5ff52b27c4c20af90a386e2727c in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d992d0a ]

SOLR-11934: Visit Solr logging, it's too noisy. (added collection to log 
messages 'Registered new searcher...'

(cherry picked from commit e4dc9e9401ed077101672b19171304e59bb7b4f6)








[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106443#comment-17106443
 ] 

ASF subversion and git services commented on SOLR-11934:


Commit e4dc9e9401ed077101672b19171304e59bb7b4f6 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e4dc9e9 ]

SOLR-11934: Visit Solr logging, it's too noisy. (added collection to log 
messages 'Registered new searcher...'








[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-13 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106423#comment-17106423
 ] 

Mikhail Khludnev commented on SOLR-14419:
-

No opinions so far. Does anyone need more time to look at this, or is it fine to 
push? I have a few more small tweaks to the DSL for this release.

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 
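The {"param":"ref"} indirection amounts to resolving such nodes against the request's params map before the query is parsed. A minimal sketch (hypothetical `resolve_params` helper, not Solr's actual implementation):

```python
def resolve_params(node, params):
    """Recursively replace {"param": name} nodes with the named params entry."""
    if isinstance(node, dict):
        if set(node) == {"param"}:
            return params[node["param"]]
        return {k: resolve_params(v, params) for k, v in node.items()}
    if isinstance(node, list):
        return [resolve_params(v, params) for v in node]
    return node

request = {
    "query": {"parents": {"which": {"param": "prnts"}, "query": "..."}},
    "params": {"prnts": "type:parent"},
}
resolved = resolve_params(request["query"], request["params"])
print(resolved)  # {'parents': {'which': 'type:parent', 'query': '...'}}
```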






[jira] [Commented] (SOLR-10778) Address precommit WARNINGS

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-10778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106388#comment-17106388
 ] 

Erick Erickson commented on SOLR-10778:
---

[~atris] [~gus]  Many thanks for volunteering. I assigned each of you a 
directory, but there's no reason at all you need to be obligated to do that 
particular one. Please feel absolutely free to pick some other directory and 
create a sub-task for yourself. All I'm really doing here is using sub-tasks to 
divvy up the work.

It'll be a few days before I'm ready to tackle any more, I've got a couple of 
other cleanups to get done first to preserve some of the work for SOLR-10810...

> Address precommit WARNINGS
> --
>
> Key: SOLR-10778
> URL: https://issues.apache.org/jira/browse/SOLR-10778
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.6
>Reporter: Andrew Musselman
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: dated-warnings, dated-warnings.log, notclosed.txt
>
>
> During precommit we see lots of warnings. I'm turning this into an umbrella 
> issue about getting precommit warnings out of the code in general. Yes, this 
> will take a while.
> See SOLR-10809 for getting all warnings out of solr/core. I want to 
> selectively have precommit fail when "some part" of the code is clean so we 
> don't backslide, and solr/core was the finest granularity I could see how to 
> change.
> If you read more of the comments here, you can see that there are some 
> serious code refactoring that could be done. I'm electing to simply 
> SuppressWarnings rather than re-arrange code at this point whenever the code 
> is tricky. If anyone goes back in and tries to clean the code up, then can 
> remove the annotation(s).






[jira] [Created] (SOLR-14480) Get a clean compile of solr/core/api

2020-05-13 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14480:
-

 Summary: Get a clean compile of solr/core/api
 Key: SOLR-14480
 URL: https://issues.apache.org/jira/browse/SOLR-14480
 Project: Solr
  Issue Type: Sub-task
  Components: Build
Reporter: Erick Erickson
Assignee: Atri Sharma


[~atri] Here's one for you!

Here's how I'd like to approach this:
 * Let's start with solr/core, one subdirectory at a time.
 * See SOLR-14474 for how we want to address auxiliary classes, especially the 
question to move them to their own file or nest them. It'll be fuzzy until we 
get some more experience.
 * Let's just clean everything up _except_ deprecations. My thinking here is 
that there will be a bunch of code changes that we can/should backport to 8x to 
clean up the warnings. Deprecations will be (probably) 9.0 only so there'll be 
fewer problems with maintaining the two branches if we leave deprecations out 
of the mix for the present.
 * Err on the side of adding @SuppressWarnings rather than code changes for 
this phase. If it's reasonably safe to change the code (say by adding ) do 
so, but substantive changes are too likely to have unintended consequences. I'd 
like to reach a consensus on what changes are "safe", that'll probably be an 
ongoing discussion as we run into them for a while.
 * I expect there'll be a certain amount of stepping on each other's toes, no 
doubt to clean some things up in one of the subdirectories we'll have to change 
something in an ancestor directory, but we can deal with those as they come up, 
probably that'll just mean merging the current master with the fork we're 
working on...

Let me know what you think or if you'd like to change the approach.

Oh, and all I did here was take the second subdirectory of solr/core that I 
found, feel free to take on something else.






[jira] [Created] (SOLR-14479) Get a clean compile of solr/core/analysis

2020-05-13 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14479:
-

 Summary: Get a clean compile of solr/core/analysis
 Key: SOLR-14479
 URL: https://issues.apache.org/jira/browse/SOLR-14479
 Project: Solr
  Issue Type: Sub-task
  Components: Build
Reporter: Erick Erickson
Assignee: Gus Heck


[~gus] Ask and ye shall receive.

Here's how I'd like to approach this:
 * Let's start with solr/core, one subdirectory at a time.
 * See SOLR-14474 for how we want to address auxiliary classes, especially the 
question to move them to their own file or nest them. It'll be fuzzy until we 
get some more experience.
 * Let's just clean everything up _except_ deprecations. My thinking here is 
that there will be a bunch of code changes that we can/should backport to 8x to 
clean up the warnings. Deprecations will be (probably) 9.0 only so there'll be 
fewer problems with maintaining the two branches if we leave deprecations out 
of the mix for the present.
 * Err on the side of adding @SuppressWarnings rather than code changes for 
this phase. If it's reasonably safe to change the code (say by adding ) do 
so, but substantive changes are too likely to have unintended consequences. I'd 
like to reach a consensus on what changes are "safe", that'll probably be an 
ongoing discussion as we run into them for a while.
 * I expect there'll be a certain amount of stepping on each other's toes, no 
doubt to clean some things up in one of the subdirectories we'll have to change 
something in an ancestor directory, but we can deal with those as they come up, 
probably that'll just mean merging the current master with the fork we're 
working on...

Let me know what you think or if you'd like to change the approach.

Oh, and all I did here was take the first subdirectory of solr/core that I 
found, feel free to take on something else.






[jira] [Commented] (SOLR-13268) Clean up any test failures resulting from defaulting to async logging

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106376#comment-17106376
 ] 

Erick Erickson commented on SOLR-13268:
---

The upgrade to a more recent log4j (SOLR-14466) didn't solve the problem; I've 
seen more reports since then. So the upgrade wasn't a magic bullet. Rats.

> Clean up any test failures resulting from defaulting to async logging
> -
>
> Key: SOLR-13268
> URL: https://issues.apache.org/jira/browse/SOLR-13268
> Project: Solr
>  Issue Type: Bug
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-13268-flushing.patch, SOLR-13268.patch, 
> SOLR-13268.patch, SOLR-13268.patch
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> This is a catch-all for test failures due to the async logging changes. So 
> far, the I see a couple failures on JDK13 only. I'll collect a "starter set" 
> here, these are likely systemic, once the root cause is found/fixed, then 
> others are likely fixed as well.
> JDK13:
> ant test  -Dtestcase=TestJmxIntegration -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=lv-LV 
> -Dtests.timezone=Asia/Riyadh -Dtests.asserts=true -Dtests.file.encoding=UTF-8
> ant test  -Dtestcase=TestDynamicURP -Dtests.seed=54B30AC62A2D71E 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=rwk 
> -Dtests.timezone=Australia/Brisbane -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106356#comment-17106356
 ] 

Joel Bernstein commented on SOLR-11934:
---

Looks good!







[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106330#comment-17106330
 ] 

Erick Erickson commented on SOLR-11934:
---

Joel: Yeah I wondered about that too. I was going to get snarky about anyone 
who named their collection "mycolllection_shard32_notmyproblem" ;)

Cassandra: OK, I'll make that happen.

Both, how does this look?

2020-05-13 14:02:54.898 INFO 
(searcherExecutor-27-thread-1-processing-n:localhost:8981_solr 
x:eoe_shard1_replica_n1 c:eoe s:shard1 r:core_node2) [c:eoe s:shard1 
r:core_node2 x:eoe_shard1_replica_n1] o.a.s.c.SolrCore [eoe_shard1_replica_n1] 
Registered new searcher autowarm time: 0 ms: Collection: 'eoe'

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something need 
> attention for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent on the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
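Point 2> above — whether unnecessary objects are created by a call like {{log.info("whatever {}", someObjectOrMethodCall)}} — can be checked without any logging framework: Java always evaluates method-call arguments eagerly, while SLF4J-style {{{}}} placeholders only defer the formatting/toString work. A minimal self-contained sketch (plain Java, no SLF4J dependency; {{info}} and {{expensive}} are hypothetical stand-ins, not Solr code):

```java
public class LogCostDemo {
    static boolean infoEnabled = false; // pretend INFO logging is turned off
    static int evaluations = 0;

    // Stand-in for log.info(String, Object): like SLF4J's parameterized
    // messages, formatting only happens when the level is enabled.
    static void info(String fmt, Object arg) {
        if (infoEnabled) {
            System.out.println(fmt.replace("{}", String.valueOf(arg)));
        }
    }

    // Stand-in for someObjectOrMethodCall: counts how often it is evaluated.
    static String expensive() {
        evaluations++;
        return "costly value";
    }

    public static void main(String[] args) {
        // No message is built, but the argument expression still runs:
        info("whatever {}", expensive());
        System.out.println(evaluations); // the method call was evaluated once

        // An explicit guard skips the argument evaluation entirely:
        if (infoEnabled) {
            info("whatever {}", expensive());
        }
        System.out.println(evaluations); // still 1
    }
}
```

So the answer is independent of the logging implementation for the method-call case: only an {{isDebugEnabled()}}-style guard (or a lambda/supplier API) avoids evaluating the argument.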






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106319#comment-17106319
 ] 

Joel Bernstein commented on SOLR-11934:
---

I'm also wondering if there are situations where the core name might not 
conform to this pattern?







[jira] [Updated] (SOLR-14407) Handle shards.purpose in the postlogs tool

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14407:
--
Attachment: SOLR-14407.patch

> Handle shards.purpose in the postlogs tool 
> ---
>
> Key: SOLR-14407
> URL: https://issues.apache.org/jira/browse/SOLR-14407
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-14407.patch
>
>
> This ticket will add the *purpose_ss* field to query type log records that 
> have a *shards.purpose* request parameter. This can be used to gather timing 
> and count information for the different parts of the distributed search. 






[jira] [Updated] (SOLR-14407) Handle shards.purpose in the postlogs tool

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14407:
--
Description: This ticket will add the *purpose_ss* field to query type log 
records that have a *shards.purpose* request parameter. This can be used to 
gather timing and count information for the different parts of the distributed 
search.   (was: This ticket will add the purpose_ss field to query records that 
have a *shards.purpose* request parameter. This can be used to gather timing 
and count information for the different parts of the distributed search.)

> Handle shards.purpose in the postlogs tool 
> ---
>
> Key: SOLR-14407
> URL: https://issues.apache.org/jira/browse/SOLR-14407
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the *purpose_ss* field to query type log records that 
> have a *shards.purpose* request parameter. This can be used to gather timing 
> and count information for the different parts of the distributed search. 






[jira] [Updated] (SOLR-14407) Handle shards.purpose in the postlogs tool

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14407:
--
Description: This ticket will add the purpose_ss field to query records 
that have a *shards.purpose* request parameter. This can be used to gather 
timing and count information for the different parts of the distributed search. 
 (was: This ticket will add the purpose_ss field)

> Handle shards.purpose in the postlogs tool 
> ---
>
> Key: SOLR-14407
> URL: https://issues.apache.org/jira/browse/SOLR-14407
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the purpose_ss field to query records that have a 
> *shards.purpose* request parameter. This can be used to gather timing and 
> count information for the different parts of the distributed search.






[jira] [Updated] (SOLR-14407) Handle shards.purpose in the postlogs tool

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14407:
--
Description: This ticket will add the purpose_ss field  (was: The postlogs 
tool currently has a *type_s* field which describes the type of request (query, 
update, commit, newSearcher, error, admin etc...). This ticket will add a 
*subtype_s* field to differentiate the logs records that appear within the 
specific types. Initially this will focus on subtypes of the *query* type which 
will include top, shard, ids and facet_refine. )

> Handle shards.purpose in the postlogs tool 
> ---
>
> Key: SOLR-14407
> URL: https://issues.apache.org/jira/browse/SOLR-14407
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> This ticket will add the purpose_ss field






[jira] [Updated] (SOLR-14407) Handle shards.purpose in the postlogs tool

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14407:
--
Summary: Handle shards.purpose in the postlogs tool   (was: Add subtypes to 
the postlogs tool )

> Handle shards.purpose in the postlogs tool 
> ---
>
> Key: SOLR-14407
> URL: https://issues.apache.org/jira/browse/SOLR-14407
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> The postlogs tool currently has a *type_s* field which describes the type of 
> request (query, update, commit, newSearcher, error, admin etc...). This 
> ticket will add a *subtype_s* field to differentiate the logs records that 
> appear within the specific types. Initially this will focus on subtypes of 
> the *query* type which will include top, shard, ids and facet_refine. 






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread Cassandra Targett (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106310#comment-17106310
 ] 

Cassandra Targett commented on SOLR-11934:
--

bq. Couldn't the log analytics cut off everything after the _shard* when doing 
its analysis?

The log analytics happen after the logs are parsed and indexed with 
bin/postlogs (the analytics are done essentially by querying the indexed log 
records in various ways), and it's the parser that needs to be able to 
separate the elements of each log record into the different fields. Making the 
parser create a new field by cutting part of another for this particular record 
type is likely pretty doable, but a better alternative would be to avoid 
complicating the parser logic by simply printing the helpful fields in the log 
record whenever possible.
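For illustration, the parser-side cut being discussed could look roughly like this (a hypothetical sketch, not the actual postlogs code): a greedy regex splits a core name like {{eoe_shard1_replica_n1}} back into collection/shard/replica parts, and greedy matching even tolerates collection names that themselves contain {{_shardN_}}:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CoreNameParser {
    // Greedy (.+) grabs the longest prefix that still leaves a valid
    // _shard<N>_replica_* suffix, so "_shard" inside a collection name is OK.
    private static final Pattern CORE =
            Pattern.compile("^(.+)_(shard\\d+)_(replica_.+)$");

    static String collectionOf(String coreName) {
        Matcher m = CORE.matcher(coreName);
        return m.matches() ? m.group(1) : null; // null: not a SolrCloud-style name
    }

    public static void main(String[] args) {
        System.out.println(collectionOf("eoe_shard1_replica_n1"));
        System.out.println(
                collectionOf("mycolllection_shard32_notmyproblem_shard1_replica_n1"));
    }
}
```

Even so, any name that does not follow the {{collection_shardN_replica_*}} convention comes out as null, which is why printing the collection explicitly in the log record is the more robust fix.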







[jira] [Commented] (SOLR-6203) cast exception while searching with sort function and result grouping

2020-05-13 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-6203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106309#comment-17106309
 ] 

Lucene/Solr QA commented on SOLR-6203:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 16s{color} 
| {color:red} SOLR-6203 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-6203 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12978434/SOLR-6203.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/746/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> cast exception while searching with sort function and result grouping
> -
>
> Key: SOLR-6203
> URL: https://issues.apache.org/jira/browse/SOLR-6203
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.7, 4.8
>Reporter: Nate Dire
>Priority: Major
> Attachments: README, SOLR-6203-unittest.patch, 
> SOLR-6203-unittest.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, SOLR-6203.patch, 
> SOLR-6203.patch, SOLR-6203.patch
>
>
> After upgrading from 4.5.1 to 4.7+, a schema including a {{"*"}} dynamic 
> field as text gets a cast exception when using a sort function and result 
> grouping.  
> Repro (with example config):
> # Add {{"*"}} dynamic field as a {{TextField}}, eg:
> {noformat}
> 
> {noformat}
> #  Create  sharded collection
> {noformat}
> curl 
> 'http://localhost:8983/solr/admin/collections?action=CREATE=test=2=2'
> {noformat}
> # Add example docs (query must have some results)
> # Submit query which sorts on a function result and uses result grouping:
> {noformat}
> {
>   "responseHeader": {
> "status": 500,
> "QTime": 50,
> "params": {
>   "sort": "sqrt(popularity) desc",
>   "indent": "true",
>   "q": "*:*",
>   "_": "1403709010008",
>   "group.field": "manu",
>   "group": "true",
>   "wt": "json"
> }
>   },
>   "error": {
> "msg": "java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef",
> "code": 500
>   }
> }
> {noformat}
> Source exception from log:
> {noformat}
> ERROR - 2014-06-25 08:10:10.055; org.apache.solr.common.SolrException; 
> java.lang.ClassCastException: java.lang.Double cannot be cast to 
> org.apache.lucene.util.BytesRef
> at 
> org.apache.solr.schema.FieldType.marshalStringSortValue(FieldType.java:981)
> at org.apache.solr.schema.TextField.marshalSortValue(TextField.java:176)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.serializeSearchGroup(SearchGroupsResultTransformer.java:125)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:65)
> at 
> org.apache.solr.search.grouping.distributed.shardresultserializer.SearchGroupsResultTransformer.transform(SearchGroupsResultTransformer.java:43)
> at 
> org.apache.solr.search.grouping.CommandHandler.processResult(CommandHandler.java:193)
> at 
> org.apache.solr.handler.component.QueryComponent.process(QueryComponent.java:340)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   ...
> {noformat}
> It looks like {{serializeSearchGroup}} is matching the sort expression as the 
> {{"*"}} dynamic field, which is a TextField in the repro.






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106306#comment-17106306
 ] 

Erick Erickson commented on SOLR-14105:
---

Solr 8.6 (unreleased) has already been upgraded to Jetty 9.4.27 FWIW. There 
were no code changes, see SOLR-14386. Our release process produces a bunch of 
.sha1 files, so it looks like a bigger change than it was.

https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=de32d0d

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Simone Bordet (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106299#comment-17106299
 ] 

Simone Bordet commented on SOLR-14105:
--

[~janhoy] as I said, the issues are fixed in Jetty 9.4.25+. We have tests that 
verify the fix, so it's either another issue, or an edge case. Please report on 
the Jetty issue the exact details.







[jira] [Commented] (SOLR-14474) Fix auxilliary class warnings in Solr core

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106288#comment-17106288
 ] 

Erick Erickson commented on SOLR-14474:
---

Here's a trick. The default settings stop issuing warnings at 100. Let's say 
you have a class you're working on that is not shown because it has warnings 
230 - 250. Change gradle/defaults-java.gradle and add *-Xmaxwarns", "1"*, 
to the *options.compilerArgs += [* section. Then you can search the output 
window (at least in IntelliJ) for your class and click on the warnings and be 
taken to the offending line of code. I usually work backwards so the line 
numbers match up.

Don't check that change to defaults-java.gradle in, of course, although it would 
just produce lots of output...
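Sketched out, the temporary change could look something like this (illustrative only — the exact block layout in gradle/defaults-java.gradle and the cap value chosen are assumptions; pick any number comfortably above your warning count):

```groovy
// gradle/defaults-java.gradle -- temporary, do not commit
tasks.withType(JavaCompile) {
    options.compilerArgs += [
        "-Xmaxwarns", "10000"  // raise javac's default cap of 100 warnings
    ]
}
```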

> Fix auxilliary class warnings in Solr core
> --
>
> Key: SOLR-14474
> URL: https://issues.apache.org/jira/browse/SOLR-14474
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> We have quite a number of situations where multiple classes are declared in a 
> single source file, which is a poor practice. I ran across a bunch of these 
> in solr/core, and [~mdrob] fixed some of these in SOLR-14426. [~dsmiley] 
> looked at those and thought that it would have been better to just move a 
> particular class to its own file. And [~uschindler] do you have any comments?
> I have a fork with a _bunch_ of changes to get warnings out that include 
> moving more than a few classes into static inner classes, including the one 
> Mike did. I do NOT intend to commit this, it's too big/sprawling, but it does 
> serve to show a variety of situations. See: 
> https://github.com/ErickErickson/lucene-solr/tree/jira/SOLR-10810 for how 
> ugly it all looks. I intend to break this wodge down into smaller tasks and 
> start over now that I have a clue as to the scope. And do ignore the generics 
> changes as well as the consequences of upgrading apache commons CLI, those 
> need to be their own JIRA.
> What I'd like to do is agree on some guidelines for when to move classes to 
> their own file and when to move them to static inner classes.
> Some things I saw, reference the fork for the changes (again, I won't check 
> that in).
> 1> DocValuesAcc has no fewer than 9 classes that could be moved inside the 
> main class. But they all become "static abstract". And take 
> "DoubleSortedNumericDVAcc" in that class, It gets extended over in 4 other 
> files. How would all that get resolved? How many of them would people 
> recommend moving into their own files? Do we want to proliferate all those? 
> And so on with all the other plethora of classes in 
> org.apache.solr.search.facet.
> This is particularly thorny because the choices would be about a zillion new 
> classes or about a zillion edits.
> Does the idea of abstract .vs. concrete classes make any difference? IOW, if 
> we change an abstract class to a nested class, then maybe we just have to 
> change the class(es) that extend it?
> 2> StatsComponent.StatsInfo probably should be its own file?
> 3> FloatCmp, LongCmp, DoubleCmp all declare classes with "Comp" rather than 
> "Cmp". Those files should just be renamed.
> 4> JSONResponseWriter. ???
> 5> FacetRangeProcessor seems like it needs its own class
> 6> FacetRequestSorted seems like it needs its own class
> 7> FacetModule
> So what I'd like going forward is to agree on some guidelines to resolve 
> whether to move a class to its own file or make it nested (probably static). 
> Not hard-and-fast rules, just something to cut down on the rework due to 
> objections.
> And what about backporting to 8x? My suggestion is to backport what's 
> easy/doesn't break back-compat in order to make keeping the two branches in 
> sync easier.






[jira] [Commented] (LUCENE-9365) Fuzzy query has a false negative when prefix length == search term length

2020-05-13 Thread Michael McCandless (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106265#comment-17106265
 ] 

Michael McCandless commented on LUCENE-9365:


{quote}
bq. so +1 to make FuzzyQuery lenient to these cases and rewrite itself to 
PrefixQuery or RegexpQuery instead.

Would this mean we need to add a max length option to PrefixQuery?
{quote}

OK, let me narrow my +1 a bit ;)

I'm +1 to having {{FuzzyQuery}} be lenient by allowing this strange case where 
{{prefix == term.text().length()}} and implementing it "correctly", to make it 
less trappy for users.

But I'm less clear on how exactly we should implement that.  You're right, if 
we rewrite to {{PrefixQuery}} then we must then add a max length option to it.  
Maybe that is indeed a useful option to expose publicly to {{PrefixQuery}} 
users?  That would let users cap how many characters are allowed after the 
prefix.

Alternatively, we could just rewrite to an anonymous {{AutomatonQuery}} that 
accepts precisely the term as prefix, and then at most {{edit-distance}} 
additional arbitrary characters?

I'm not sure which approach is better ... I think I would favor the first 
option.
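Either rewrite has the same semantics: accept the term as an exact prefix followed by at most edit-distance extra characters. That behavior can be sketched with plain java.util.regex (the Lucene analogue would be something like a RegexpQuery over {{bba.{0,1}}}; this is an illustrative sketch, not the proposed implementation):

```java
import java.util.regex.Pattern;

public class FuzzyPrefixSemantics {
    // When prefixLength == term length, the whole term must match literally,
    // so the only edits possible are up to `maxEdits` appended characters.
    static boolean matches(String term, int maxEdits, String value) {
        return Pattern.matches(Pattern.quote(term) + ".{0," + maxEdits + "}", value);
    }

    public static void main(String[] args) {
        System.out.println(matches("bba", 1, "bba"));   // true: zero edits
        System.out.println(matches("bba", 1, "bbab"));  // true: one appended char
        System.out.println(matches("bba", 1, "bbabc")); // false: two appended chars
        System.out.println(matches("bba", 1, "bbb"));   // false: prefix must match
    }
}
```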

> Fuzzy query has a false negative when prefix length == search term length 
> --
>
> Key: LUCENE-9365
> URL: https://issues.apache.org/jira/browse/LUCENE-9365
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Reporter: Mark Harwood
>Priority: Major
>
> When using FuzzyQuery the search string `bba` does not match doc value `bbab` 
> with an edit distance of 1 and prefix length of 3.
> In FuzzyQuery an automaton is created for the "suffix" part of the search 
> string which in this case is an empty string.
> In this scenario maybe the FuzzyQuery should rewrite to a WildcardQuery of 
> the following form :
> {code:java}
> searchString + "?" 
> {code}
> .. where there's an appropriate number of ? characters according to the edit 
> distance.






[jira] [Commented] (SOLR-12823) remove clusterstate.json in Lucene/Solr 8.0

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106262#comment-17106262
 ] 

Erick Erickson commented on SOLR-12823:
---

[~murblanc] Conceptually, legacyCloud should be removed too. I haven't looked 
to see how difficult that would be. Theoretically it shouldn't be hard.

re: Perhaps add a check in Overseer and if the old znode exists and is 
non-empty, print an ERROR and abort?

First, I like the "fail fast" nature of this, we either have to do something 
like this or deal with questions about "where did replicas go?" if we just 
ignored clusterstate.json. There are certainly situations I've seen where users 
have some of their data in state.json for the collection and some in 
clusterstate.json. Admittedly they had to work to get there ;).  If a user was 
in this situation, however, what could they do to get going with Solr 9? If they 
nuked clusterstate.json, the information would be lost. I suppose they could 
hand-edit the individual state.json files, but that's difficult and error-prone.

I'm not necessarily saying we have to do anything special here to handle this 
case, just making sure we've considered the possibility. Personally I'd be fine 
with an upgrade note saying "before upgrading to Solr 9, ensure you have no 
data in clusterstate.json. If you do, use the MIGRATESTATEFORMAT command on 
your pre-Solr 9 installation before upgrading".
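The "fail fast" check could look roughly like the following at startup. This is a hedged sketch: the class and method names are made up, and the znode data is passed in as a byte array rather than fetched from ZooKeeper:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical startup guard: abort if the legacy clusterstate.json znode
// still carries collection data (names and shape are illustrative only).
public class LegacyStateGuard {
    /** Returns true when the legacy clusterstate.json holds real state. */
    public static boolean hasLegacyState(byte[] znodeData) {
        if (znodeData == null) return false;      // znode absent: nothing to migrate
        String json = new String(znodeData, StandardCharsets.UTF_8).trim();
        // An empty node or an empty JSON object also means nothing to migrate.
        return !json.isEmpty() && !json.equals("{}");
    }

    public static void checkOnStartup(byte[] znodeData) {
        if (hasLegacyState(znodeData)) {
            throw new IllegalStateException(
                "clusterstate.json is non-empty; run MIGRATESTATEFORMAT on the "
                + "pre-Solr 9 installation before upgrading");
        }
    }

    public static void main(String[] args) {
        checkOnStartup(null);                                   // fine: node absent
        checkOnStartup("{}".getBytes(StandardCharsets.UTF_8));  // fine: empty state
        System.out.println("empty legacy state accepted");
    }
}
```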

> remove clusterstate.json in Lucene/Solr 8.0
> ---
>
> Key: SOLR-12823
> URL: https://issues.apache.org/jira/browse/SOLR-12823
> Project: Solr
>  Issue Type: Task
>Reporter: Varun Thacker
>Priority: Major
>
> clusterstate.json is an artifact of a pre 5.0 Solr release. We should remove 
> that in 8.0
> It stays empty unless you explicitly ask to create the collection with the 
> old "stateFormat" and there is no reason for one to create a collection with 
> the old stateFormat.
> We should also remove the "stateFormat" argument in create collection
> We should also remove MIGRATESTATEVERSION as well
>  
>  






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106234#comment-17106234
 ] 

Jan Høydahl commented on SOLR-14105:


I think there may be several overlapping issues Solr users are facing here. You 
have the one Tim reported about multi host. You also have the issue where you 
actually do mutual TLS and Jetty will not accept a multi-host self-signed cert 
as client cert even if it is the only cert in the keystore.

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106230#comment-17106230
 ] 

Erick Erickson commented on SOLR-11934:
---

[~jbernste] So you need  *production_cv_month_201912* in your example broken 
out separately, right? Looking at that log line, there's no real purpose served 
by printing out *production_cv_month_201912_shard35_replica_n1* twice, although 
altering either one might pop out weirdly. Couldn't the log analytics cut off 
everything after the *_shard** when doing its analysis? 

 

But it would be trivial to add the collection name with: 
*newSearcher.getCore().getCoreDescriptor().getCollectionName()*

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something needs 
> attention, for instance. If I'm troubleshooting should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent of the logging implementation used? The SLF4J and log4j seem a 
> bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
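Point 2 in the quoted description, whether log.info("whatever {}", someObjectOrMethodCall) creates unnecessary work, can be demonstrated with a self-contained sketch. With SLF4J-style parameterized logging the message *formatting* is deferred, but the argument *expression* is still evaluated before the call; the tiny stand-in logger below is illustrative, not SLF4J:

```java
// Minimal stand-in for a disabled logger: parameterized logging skips
// formatting, but the argument expression is evaluated regardless.
public class LoggingCostDemo {
    public static int evaluations = 0;

    public static String expensiveDescription() {
        evaluations++;              // side effect proves evaluation happened
        return "big object dump";
    }

    /** Mimics log.debug(fmt, arg) with DEBUG disabled: does nothing. */
    public static void debugDisabled(String fmt, Object arg) {
        // formatting skipped -- but 'arg' was already computed by the caller
    }

    public static void main(String[] args) {
        debugDisabled("state: {}", expensiveDescription());
        System.out.println("evaluations = " + evaluations); // 1, despite no output

        // Guarding (or a lambda-accepting API) avoids the cost entirely:
        boolean debugEnabled = false;
        if (debugEnabled) {
            debugDisabled("state: {}", expensiveDescription());
        }
        System.out.println("evaluations = " + evaluations); // still 1
    }
}
```

So the parameterized form avoids string concatenation, but not the cost of building the argument itself; that part holds for any SLF4J backend.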






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Simone Bordet (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106229#comment-17106229
 ] 

Simone Bordet commented on SOLR-14105:
--

[~janhoy] perhaps I don't understand the use case here?

If you are self-connecting and the server keystore contains a self-signed 
certificate, then it should be enough to create a `new 
SslContextFactory.Client(true)` which does not do any certificate validation.

If you are requesting client-side certificate authentication (i.e. 
`needsClientAuth=true` on server), then the client keystore must be set up 
properly, so it is unlikely this ever worked with a server keystore.

The client does not "pick" a certificate, normally: it just validates the one 
sent by the server.

Most of the time you can get by _without_ a client keystore (for example when 
connecting to servers that send certificates that are valid and signed by a CA 
root).

If you explain exactly what your use case is, we can be more specific.
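For reference, what `new SslContextFactory.Client(true)` arranges is, in effect, a trust-all client context. In plain JSSE terms (JDK only, no Jetty) the equivalent looks roughly like the sketch below; this disables certificate validation, so it is appropriate only for self-connects with self-signed certs, never for real trust decisions:

```java
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// JDK-only equivalent of a "trust all" client SSL context, roughly what
// Jetty's SslContextFactory.Client(true) sets up internally.
public class TrustAllClientContext {
    public static SSLContext trustAllContext() throws Exception {
        TrustManager trustAll = new X509TrustManager() {
            @Override public void checkClientTrusted(X509Certificate[] chain, String authType) { }
            @Override public void checkServerTrusted(X509Certificate[] chain, String authType) { }
            @Override public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
        };
        SSLContext ctx = SSLContext.getInstance("TLS");
        // null KeyManagers: no client certificate is presented at all.
        ctx.init(null, new TrustManager[] { trustAll }, new SecureRandom());
        return ctx;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(trustAllContext().getProtocol()); // prints "TLS"
    }
}
```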

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Comment Edited] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106221#comment-17106221
 ] 

Jan Høydahl edited comment on SOLR-14105 at 5/13/20, 11:43 AM:
---

Thanks Simone. You did not quote me correctly. I said "..*seems* a bit 
incomplete and trappy", and that comment was meant for 9.4.24 that we use, and 
it took us several iterations to get the Server/Client split right.

Again, a workaround is to specify a separate SOLR_SSL_CLIENT_KEY_STORE.

I think it is very hard to follow the GitHub issues/PRs you link to, so even 
after reading them, I did not understand that 9.4.25 actually allows multi 
certs even on the client side? This was the behaviour we had in Solr before 
upgrading from 9.4.19 to 9.4.24 - Jetty would pick the first cert on the 
keystore instead of throwing an exception. What is the new selection logic 
introduced in 9.4.25 (when we use  SslContextFactory.Client)?

Sounds like Solr should anyway upgrade Jetty!


was (Author: janhoy):
Thanks Simone. You did not quote me correctly. I said "..*seems* a bit 
incomplete and trappy", and that comment is for 9.4.14 that we use.

Again, a workaround is to specify a separate SOLR_SSL_CLIENT_KEY_STORE.

I think it is very hard to follow the GitHub issues/PRs you link to, so even 
after reading them, I did not understand that 9.4.25 actually allows multi 
certs even on the client side? This was the behaviour we had in Solr before 
upgrading from 9.4.19 to 9.4.24 - Jetty would pick the first cert on the 
keystore instead of throwing an exception. What is the new selection logic 
introduced in 9.4.25 (when we use  SslContextFactory.Client)?

Sounds like Solr should anyway upgrade Jetty!

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106221#comment-17106221
 ] 

Jan Høydahl commented on SOLR-14105:


Thanks Simone. You did not quote me correctly. I said "..*seems* a bit 
incomplete and trappy", and that comment is for 9.4.14 that we use.

Again, a workaround is to specify a separate SOLR_SSL_CLIENT_KEY_STORE.

I think it is very hard to follow the GitHub issues/PRs you link to, so even 
after reading them, I did not understand that 9.4.25 actually allows multi 
certs even on the client side? This was the behaviour we had in Solr before 
upgrading from 9.4.19 to 9.4.24 - Jetty would pick the first cert on the 
keystore instead of throwing an exception. What is the new selection logic 
introduced in 9.4.25 (when we use  SslContextFactory.Client)?

Sounds like Solr should anyway upgrade Jetty!

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14478:
--
Description: Currently the *diff* function performs *serial differencing* 
on a numeric vector. This ticket will allow the diff function to perform serial 
differencing on all the rows of a *matrix*. This will make it easy to perform 
*correlations* on a matrix of *differenced time series vectors* using math 
expressions.  (was: Currently the *diff* function performs *serial 
differencing* on a numeric vector. This ticket will allow the diff function to 
perform serial differencing on all the rows of a *matrix*. This will make it 
easy to perform *correlations* on *differenced time series matrices* using math 
expressions.)
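The proposed row-wise behavior is easy to sketch outside of streaming expressions: first-order serial differencing applied independently to each row of a matrix. This is plain Java for illustration, not the Solr evaluator itself:

```java
import java.util.Arrays;

// Illustrative row-wise serial differencing, as proposed for the
// diff Stream Evaluator (not Solr code).
public class RowDiff {
    /** First-order serial difference of a vector: out[i] = v[i+1] - v[i]. */
    public static double[] diff(double[] v) {
        double[] out = new double[v.length - 1];
        for (int i = 0; i < out.length; i++) out[i] = v[i + 1] - v[i];
        return out;
    }

    /** Applies diff to every row of a matrix, so each row loses one column. */
    public static double[][] diffRows(double[][] m) {
        double[][] out = new double[m.length][];
        for (int r = 0; r < m.length; r++) out[r] = diff(m[r]);
        return out;
    }

    public static void main(String[] args) {
        double[][] series = { {1, 2, 4, 7}, {10, 8, 8, 5} };
        System.out.println(Arrays.deepToString(diffRows(series)));
        // [[1.0, 2.0, 3.0], [-2.0, 0.0, -3.0]]
    }
}
```

Differencing each row this way is what makes the rows stationary enough to feed directly into a correlation over the matrix.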

> Allow the diff Stream Evaluator to operate on the rows of a matrix
> --
>
> Key: SOLR-14478
> URL: https://issues.apache.org/jira/browse/SOLR-14478
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> Currently the *diff* function performs *serial differencing* on a numeric 
> vector. This ticket will allow the diff function to perform serial 
> differencing on all the rows of a *matrix*. This will make it easy to perform 
> *correlations* on a matrix of *differenced time series vectors* using math 
> expressions.






[jira] [Assigned] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-14478:
-

Assignee: Joel Bernstein

> Allow the diff Stream Evaluator to operate on the rows of a matrix
> --
>
> Key: SOLR-14478
> URL: https://issues.apache.org/jira/browse/SOLR-14478
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
>
> Currently the *diff* function performs *serial differencing* on a numeric 
> vector. This ticket will allow the diff function to perform serial 
> differencing on all the rows of a *matrix*. This will make it easy to perform 
> *correlations* on *differenced time series matrices* using math expressions.






[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14478:
--
Summary: Allow the diff Stream Evaluator to operate on the rows of a matrix 
 (was: Allow diff Stream Evaluator to operate on a matrix)

> Allow the diff Stream Evaluator to operate on the rows of a matrix
> --
>
> Key: SOLR-14478
> URL: https://issues.apache.org/jira/browse/SOLR-14478
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> Currently the *diff* function performs serial differencing on a numeric 
> vector. This ticket will allow the diff function to perform serial 
> differencing on all the rows of a *matrix*. This will make it easy to perform 
> *correlations* on *differenced time series matrices* using math expressions.






[jira] [Updated] (SOLR-14478) Allow the diff Stream Evaluator to operate on the rows of a matrix

2020-05-13 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14478:
--
Description: Currently the *diff* function performs *serial differencing* 
on a numeric vector. This ticket will allow the diff function to perform serial 
differencing on all the rows of a *matrix*. This will make it easy to perform 
*correlations* on *differenced time series matrices* using math expressions.  
(was: Currently the *diff* function performs serial differencing on a numeric 
vector. This ticket will allow the diff function to perform serial differencing 
on all the rows of a *matrix*. This will make it easy to perform *correlations* 
on *differenced time series matrices* using math expressions.)

> Allow the diff Stream Evaluator to operate on the rows of a matrix
> --
>
> Key: SOLR-14478
> URL: https://issues.apache.org/jira/browse/SOLR-14478
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
>
> Currently the *diff* function performs *serial differencing* on a numeric 
> vector. This ticket will allow the diff function to perform serial 
> differencing on all the rows of a *matrix*. This will make it easy to perform 
> *correlations* on *differenced time series matrices* using math expressions.






[jira] [Created] (SOLR-14478) Allow diff Stream Evaluator to operate on a matrix

2020-05-13 Thread Joel Bernstein (Jira)
Joel Bernstein created SOLR-14478:
-

 Summary: Allow diff Stream Evaluator to operate on a matrix
 Key: SOLR-14478
 URL: https://issues.apache.org/jira/browse/SOLR-14478
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: streaming expressions
Reporter: Joel Bernstein


Currently the *diff* function performs serial differencing on a numeric vector. 
This ticket will allow the diff function to perform serial differencing on all 
the rows of a *matrix*. This will make it easy to perform *correlations* on 
*differenced time series matrices* using math expressions.






[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-05-13 Thread Simone Bordet (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106214#comment-17106214
 ] 

Simone Bordet commented on SOLR-14105:
--

[~janhoy] perhaps it's a bit harsh to say that "the Jetty SSL code is 
incomplete and trappy".

Feel free to open an issue to describe what's incomplete and what's trappy, and 
we'll fix it.

 

We have responded to the issue opened by [~ttaranov] and we have, to my 
knowledge, already fixed the issue in Jetty 9.4.25.

If using Jetty 9.4.25 or later in Solr does not fix the issue, let's work out 
the details together.

 

Using on the client a keystore that is meant for servers, containing multiple 
certificates, multiple aliases, etc. is probably not best - although common 
practice especially for testing or in known situations (like the Solr 
self-connect).

Having said that, Jetty must work on the client with a server keystore - and 
that's fixed in Jetty 9.4.25. Again, if that's not the case, tell us more details.

 

Feel free to comment on the Jetty issue. We are about to release Jetty 9.4.29, 
but are willing to hold it if you still have problems with Solr.

 

Thanks!

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.<init>(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Commented] (SOLR-12823) remove clusterstate.json in Lucene/Solr 8.0

2020-05-13 Thread Noble Paul (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106212#comment-17106212
 ] 

Noble Paul commented on SOLR-12823:
---

The {{stateFormat}} attribute was introduced to split the cluster state. The 
assumption was that we would use it as a generic version number to make 
changes to state. It's been almost 6 years and there has been no need for that. 
Probably we should just get rid of it altogether.

> remove clusterstate.json in Lucene/Solr 8.0
> ---
>
> Key: SOLR-12823
> URL: https://issues.apache.org/jira/browse/SOLR-12823
> Project: Solr
>  Issue Type: Task
>Reporter: Varun Thacker
>Priority: Major
>
> clusterstate.json is an artifact of a pre 5.0 Solr release. We should remove 
> that in 8.0
> It stays empty unless you explicitly ask to create the collection with the 
> old "stateFormat" and there is no reason for one to create a collection with 
> the old stateFormat.
> We should also remove the "stateFormat" argument in create collection
> We should also remove MIGRATESTATEVERSION as well
>  
>  






[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2020-05-13 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17106207#comment-17106207
 ] 

Joel Bernstein commented on SOLR-11934:
---

Here is a sample log record for a new searcher:

 
{code:java}
2019-12-16 19:00:23.931 INFO  (searcherExecutor-66-thread-1) [   ] 
o.a.s.c.SolrCore [production_cv_month_201912_shard35_replica_n1] Registered new 
searcher Searcher@16ef5fac[production_cv_month_201912_shard35_replica_n1] ...
 {code}

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something needs 
> attention, for instance. If I'm troubleshooting, should I turn on INFO? 
> DEBUG? TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked. Is this 
> independent of the logging implementation used? The SLF4J and log4j docs 
> seem a bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks; even starting 
> now, it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
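
On point 2 above: with SLF4J-style parameterized logging, the message string 
is only formatted when the level is enabled, but the argument expression is 
still evaluated at the call site; only an explicit level guard (or a 
Supplier-based API) avoids that cost too. A self-contained sketch of the 
distinction (the SketchLogger class below is a hypothetical stand-in, not the 
real SLF4J API):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in logger (NOT the real SLF4J API) used only to show
// what gets evaluated when a log level is disabled.
class SketchLogger {
    boolean debugEnabled = false;

    void debug(String template, Object arg) {
        // Formatting only happens when the level is enabled...
        if (debugEnabled) {
            System.out.println(template.replace("{}", String.valueOf(arg)));
        }
        // ...but the caller has already evaluated `arg` either way.
    }
}

public class LoggingCost {
    static final AtomicInteger evaluations = new AtomicInteger();

    static String expensive() {
        evaluations.incrementAndGet(); // counts how often the argument is built
        return "expensive-value";
    }

    public static void main(String[] args) {
        SketchLogger log = new SketchLogger();

        // Parameterized call: no string formatting, but expensive() still runs.
        log.debug("whatever {}", expensive());

        // Guarded call: expensive() is never invoked while debug is off.
        if (log.debugEnabled) {
            log.debug("whatever {}", expensive());
        }

        System.out.println("argument evaluations: " + evaluations.get());
    }
}
```

This is why an isDebugEnabled()-style guard can still matter on hot paths even 
when the {} placeholder form is used.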





