[jira] [Created] (SOLR-14531) Refactor out internode requests from HttpShardHandler

2020-06-01 Thread Noble Paul (Jira)
Noble Paul created SOLR-14531:
-

 Summary: Refactor out internode requests from HttpShardHandler
 Key: SOLR-14531
 URL: https://issues.apache.org/jira/browse/SOLR-14531
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul
Assignee: Noble Paul









[jira] [Updated] (SOLR-14530) Eliminate overuse of ShardHandler interface

2020-06-01 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-14530:
--
Description: {{ShardHandler}} is a complicated interface. It is being used 
in many places to make a simple HTTP request to another node. This makes the 
code complex. Most of the time a simple synchronous request is all we need.  
(was: {{ShardHandler}} is a complicates interface . It is being used in many 
places to make a simple HTTP request to another node. This makes the code 
complex. Most of the time a simple synchronous request is all we need.)
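For illustration, a minimal sketch of the kind of "simple synchronous request to another node" the description refers to, written with plain SolrJ; the node URL, path and params here are made-up examples, not code from this refactoring:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class SimpleInternodeRequestSketch {
  public static void main(String[] args) throws Exception {
    // Base URL of the remote node; host and port are placeholders.
    try (SolrClient client = new HttpSolrClient.Builder("http://other-node:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "STATUS");
      // One blocking request/response round trip: no ShardHandler,
      // no response callbacks, no pending-request bookkeeping.
      NamedList<Object> rsp =
          client.request(new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/cores", params));
      System.out.println(rsp);
    }
  }
}
{code}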

> Eliminate overuse of ShardHandler interface
> ---
>
> Key: SOLR-14530
> URL: https://issues.apache.org/jira/browse/SOLR-14530
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> {{ShardHandler}} is a complicated interface. It is being used in many places 
> to make a simple HTTP request to another node. This makes the code complex. 
> Most of the time a simple synchronous request is all we need.






[jira] [Commented] (SOLR-14525) For components loaded from packages SolrCoreAware, ResourceLoaderAware are not honored

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123366#comment-17123366
 ] 

ASF subversion and git services commented on SOLR-14525:


Commit a753b88713364d4fdd4158b78d05612f4b27a432 in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a753b88 ]

SOLR-14525: Test failure


> For components loaded from packages SolrCoreAware, ResourceLoaderAware are 
> not honored
> --
>
> Key: SOLR-14525
> URL: https://issues.apache.org/jira/browse/SOLR-14525
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: packages
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> inform() methods are not invoked if the plugins are loaded from packages
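For context, a minimal sketch of the contract involved (the class name and method bodies are made up for illustration): a plugin implementing these two interfaces expects Solr to call both {{inform()}} methods after construction and before it serves requests, and that is the step being skipped for package-loaded plugins.

{code:java}
import java.io.IOException;
import org.apache.lucene.analysis.util.ResourceLoader;
import org.apache.lucene.analysis.util.ResourceLoaderAware;
import org.apache.solr.core.SolrCore;
import org.apache.solr.util.plugin.SolrCoreAware;

// In a real plugin this class would also extend one of Solr's plugin types
// (SearchComponent, QParserPlugin, etc.); only the "aware" contract is shown here.
public class MyAwarePlugin implements SolrCoreAware, ResourceLoaderAware {

  @Override
  public void inform(SolrCore core) {
    // late initialization that needs the owning core, e.g. registering close hooks
  }

  @Override
  public void inform(ResourceLoader loader) throws IOException {
    // load auxiliary resources (dictionaries, config files) through the loader
  }
}
{code}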






[jira] [Commented] (SOLR-14525) For components loaded from packages SolrCoreAware, ResourceLoaderAware are not honored

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123364#comment-17123364
 ] 

ASF subversion and git services commented on SOLR-14525:


Commit 552f1940af3ac5f95bcb1f890ff6619ea9463313 in lucene-solr's branch 
refs/heads/master from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=552f194 ]

SOLR-14525: Test failure


> For components loaded from packages SolrCoreAware, ResourceLoaderAware are 
> not honored
> --
>
> Key: SOLR-14525
> URL: https://issues.apache.org/jira/browse/SOLR-14525
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: packages
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> inform() methods are not invoked if the plugins are loaded from packages






[jira] [Updated] (SOLR-14530) Eliminate overuse of ShardHandler interface

2020-06-01 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-14530:
--
Summary: Eliminate overuse of ShardHandler interface  (was: Eliminate 
overuse of ShardHandler)

> Eliminate overuse of ShardHandler interface
> ---
>
> Key: SOLR-14530
> URL: https://issues.apache.org/jira/browse/SOLR-14530
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> {{ShardHandler}} is a complicated interface. It is being used in many places 
> to make a simple HTTP request to another node. This makes the code complex. 
> Most of the time a simple synchronous request is all we need.






[jira] [Created] (SOLR-14530) Eliminate overuse of ShardHandler

2020-06-01 Thread Noble Paul (Jira)
Noble Paul created SOLR-14530:
-

 Summary: Eliminate overuse of ShardHandler
 Key: SOLR-14530
 URL: https://issues.apache.org/jira/browse/SOLR-14530
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul
Assignee: Noble Paul


{{ShardHandler}} is a complicated interface. It is being used in many places 
to make a simple HTTP request to another node. This makes the code complex. 
Most of the time a simple synchronous request is all we need.






[jira] [Commented] (SOLR-14520) json.facets: allBucket:true can cause server errors when combined with refine:true

2020-06-01 Thread Michael Gibney (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121522#comment-17121522
 ] 

Michael Gibney commented on SOLR-14520:
---

{quote}one concern i have that's still nagging me is how to reproduce the 2nd 
type of failure i mentioned above
{quote}
Yes, I'm not exactly sure. Looking just now, we only reach the problematic part 
of the code if {{facetInfo!=null}} _and_ {{skipFacet==false}}, which only 
happens for "partial" refinement buckets, which I don't understand well at the 
moment. Perhaps the key to reproducing might be to figure out how/when 
{{partialBuckets}} for subs are initially populated ({{"_p"}}), which it looks 
like happens in {{FacetRequestSortedMerger.getRefinement(...)}}? Although I 
don't understand it at the moment, it does look like it's indirectly determined 
by a bunch of conditional logic ... i.e., conditions that have to be satisfied 
in order for that part of the code to be exercised ... which at least could 
explain why it's hard to reproduce the problem? 

> json.facets: allBucket:true can cause server errors when combined with 
> refine:true
> --
>
> Key: SOLR-14520
> URL: https://issues.apache.org/jira/browse/SOLR-14520
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14520.patch, SOLR-14520.patch
>
>
> Another bug that was discovered while testing SOLR-14467...
> In some situations, using {{allBuckets:true}} in conjunction with 
> {{refine:true}} can cause server errors during the "refinement" requests to 
> the individual shards -- either NullPointerExceptions from some (nested) 
> SlotAccs when SpecialSlotAcc tries to collect them, or 
> ArrayIndexOutOfBoundsException from CountSlotArrAcc.incrementCount because 
> it's asked to collect to "large" slot# values even though it's been 
> initialized with a size of '1'.
> NOTE: these problems may be specific to FacetFieldProcessorByArrayDV - I have 
> not yet seen similar failures from FacetFieldProcessorByArrayUIF (those are 
> the only 2 used when doing refinement) but that may just be a fluke of 
> testing.
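For illustration, a sketch of the general shape of request being discussed: a distributed terms facet that combines {{refine:true}} with {{allBuckets:true}} plus a nested stat. The collection, field and stat names are made-up examples, not the exact reproduction from the tests:

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.ModifiableSolrParams;

public class AllBucketsRefineSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("q", "*:*");
      params.set("rows", "0");
      // Refinement only happens against multi-shard collections; a small limit
      // with no overrequest makes the second-phase refinement requests more likely.
      params.set("json.facet",
          "{ cats: { type: terms, field: cat_s, limit: 5, overrequest: 0,"
        + "          refine: true, allBuckets: true,"
        + "          facet: { avg_price: 'avg(price_d)' } } }");
      QueryResponse rsp = new QueryRequest(params).process(client, "techproducts");
      System.out.println(rsp.getResponse().get("facets"));
    }
  }
}
{code}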






[jira] [Commented] (SOLR-14419) Query DSL {"param":"ref"}

2020-06-01 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121515#comment-17121515
 ] 

Lucene/Solr QA commented on SOLR-14419:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
|| || || || {color:brown} master Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  0m  2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:black}{color} | {color:black} {color} | {color:black}  1m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14419 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13004522/SOLR-14419-refguide.patch
 |
| Optional Tests |  ratsources  validatesourcepatterns  validaterefguide  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / e841d7625cc |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| modules | C: solr/solr-ref-guide U: solr/solr-ref-guide |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/758/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Query DSL {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419-refguide.patch, SOLR-14419.patch, 
> SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Commented] (SOLR-14300) Some conditional clauses on unindexed field will be ignored by query parser in some specific cases

2020-06-01 Thread Hongtai Xue (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121492#comment-17121492
 ] 

Hongtai Xue commented on SOLR-14300:


Two months have passed and nobody has reviewed this ticket...

Please let me know if I missed something 

> Some conditional clauses on unindexed field will be ignored by query parser 
> in some specific cases
> --
>
> Key: SOLR-14300
> URL: https://issues.apache.org/jira/browse/SOLR-14300
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 8.3, 8.4
> Environment: Solr 7.3.1 
> centos7.5
>Reporter: Hongtai Xue
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 8.3, 8.4
>
> Attachments: SOLR-14300.patch
>
>
> In some specific cases, some conditional clauses on an unindexed field will be 
> ignored.
>  * For a query like q=A:1 OR B:1 OR A:2 OR B:2,
>  if field B is not indexed (but docValues="true"), "B:1" will be lost.
>   
>  * But if you write the query like q=A:1 OR A:2 OR B:1 OR B:2,
>  it will work perfectly.
> The only difference between the two queries is that they are written in different orders:
>  one is *ABAB*, the other is *AABB*.
>  
> *Steps to reproduce*
>  You can easily reproduce this problem on a Solr collection with the _default 
> configset and exampledocs/books.csv data.
>  # create a _default collection
> {code:java}
> bin/solr create -c books -s 2 -rf 2{code}
>  # post books.csv.
> {code:java}
> bin/post -c books example/exampledocs/books.csv{code}
>  # Run the following queries.
>  ** query1: 
> [http://localhost:8983/solr/books/select?q=+(name_str:Foundation+OR+cat:book+OR+name_str:Jhereg+OR+cat:cd)&debug=query]
>  ** query2: 
> [http://localhost:8983/solr/books/select?q=+(name_str:Foundation+OR+name_str:Jhereg+OR+cat:book+OR+cat:cd)&debug=query]
>  ** then you can see that the parsed queries are different.
>  *** query1.  ("name_str:Foundation" is lost.)
> {code:json}
>  "debug":{
>      "rawquerystring":"+(name_str:Foundation OR cat:book OR name_str:Jhereg 
> OR cat:cd)",
>      "querystring":"+(name_str:Foundation OR cat:book OR name_str:Jhereg OR 
> cat:cd)",
>      "parsedquery":"+(cat:book cat:cd (name_str:[[4a 68 65 72 65 67] TO [4a 
> 68 65 72 65 67]]))",
>      "parsedquery_toString":"+(cat:book cat:cd name_str:[[4a 68 65 72 65 67] 
> TO [4a 68 65 72 65 67]])",
>      "QParser":"LuceneQParser"}}{code}
>  *** query2.  ("name_str:Foundation" isn't lost.)
> {code:json}
>    "debug":{
>      "rawquerystring":"+(name_str:Foundation OR name_str:Jhereg OR cat:book 
> OR cat:cd)",
>      "querystring":"+(name_str:Foundation OR name_str:Jhereg OR cat:book OR 
> cat:cd)",
>      "parsedquery":"+(cat:book cat:cd ((name_str:[[46 6f 75 6e 64 61 74 69 6f 
> 6e] TO [46 6f 75 6e 64 61 74 69 6f 6e]]) (name_str:[[4a 68 65 72 65 67] TO 
> [4a 68 65 72 65 67]])))",
>      "parsedquery_toString":"+(cat:book cat:cd (name_str:[[46 6f 75 6e 64 61 
> 74 69 6f 6e] TO [46 6f 75 6e 64 61 74 69 6f 6e]] name_str:[[4a 68 65 72 65 
> 67] TO [4a 68 65 72 65 67]]))",
>      "QParser":"LuceneQParser"}{code}






[jira] [Updated] (SOLR-14520) json.facets: allBucket:true can cause server errors when combined with refine:true

2020-06-01 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter updated SOLR-14520:
--
Attachment: SOLR-14520.patch
Status: Open  (was: Open)

Attaching an updated patch:
 * small correction to the "expected" results in TestJsonFacetRefinement
 * fix proposed by [~mgibney] in his "[^SOLR-14467_allBuckets_refine.patch]" 
attachment to SOLR-14467 (where this bug was first found).

I still need to review & think about Michael's solution a bit more before 
committing – my initial impression is that it's better than what we have now, 
but I want to think through whether it introduces any different/new bugs.

Michael: one concern I have that's still nagging me is how to reproduce the 2nd 
type of failure I mentioned above – I believe it's also intrinsically "fixed" in 
your patch by the nature of changing out the {{countAcc}} to something that 
will just flat out ignore the slot, but I'm still perplexed why my attempts at 
updating the tests to isolate it couldn't seem to trigger it – do you have any 
idea what a simple reproducible test case for that failure might look like?

> json.facets: allBucket:true can cause server errors when combined with 
> refine:true
> --
>
> Key: SOLR-14520
> URL: https://issues.apache.org/jira/browse/SOLR-14520
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14520.patch, SOLR-14520.patch
>
>
> Another bug that was discovered while testing SOLR-14467...
> In some situations, using {{allBuckets:true}} in conjunction with 
> {{refine:true}} can cause server errors during the "refinement" requests to 
> the individual shards -- either NullPointerExceptions from some (nested) 
> SlotAccs when SpecialSlotAcc tries to collect them, or 
> ArrayIndexOutOfBoundsException from CountSlotArrAcc.incrementCount because 
> it's asked to collect to "large" slot# values even though it's been 
> initialized with a size of '1'.
> NOTE: these problems may be specific to FacetFieldProcessorByArrayDV - I have 
> not yet seen similar failures from FacetFieldProcessorByArrayUIF (those are 
> the only 2 used when doing refinement) but that may just be a fluke of 
> testing.






[jira] [Commented] (SOLR-14525) For components loaded from packages SolrCoreAware, ResourceLoaderAware are not honored

2020-06-01 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121453#comment-17121453
 ] 

Chris M. Hostetter commented on SOLR-14525:
---

This change seems to be causing trivially reproducible failures in 
PackageManagerCLITest (regardless of seed) due to a 
ConcurrentModificationException on an ArrayList ...

{noformat}
   [junit4]   2> 8061 ERROR (qtp1868832549-59) [n:127.0.0.1:45103_solr ] 
o.a.s.a.AnnotatedApi Error executing command 
   [junit4]   2>   => java.lang.reflect.InvocationTargetException
   [junit4]   2>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]   2> java.lang.reflect.InvocationTargetException: null
   [junit4]   2>at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
   [junit4]   2>at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 ~[?:?]
   [junit4]   2>at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 ~[?:?]
   [junit4]   2>at java.lang.reflect.Method.invoke(Method.java:566) 
~[?:?]
   [junit4]   2>at 
org.apache.solr.api.AnnotatedApi$Cmd.invoke(AnnotatedApi.java:250) ~[java/:?]
   [junit4]   2>at 
org.apache.solr.api.AnnotatedApi.call(AnnotatedApi.java:179) ~[java/:?]
   [junit4]   2>at 
org.apache.solr.api.V2HttpCall.handleAdmin(V2HttpCall.java:339) ~[java/:?]
   [junit4]   2>at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:809) 
~[java/:?]
   [junit4]   2>at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:562) ~[java/:?]
   [junit4]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
 ~[java/:?]
   [junit4]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
 ~[java/:?]
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
 ~[jetty-servlet-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:166)
 ~[java/:?]
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1604)
 ~[jetty-servlet-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545) 
~[jetty-servlet-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485) 
~[jetty-servlet-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
 ~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) 
~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) 
~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:322)
 ~[jetty-rewrite-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:717) 
~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127) 
~[jetty-server-9.4.27.v20200227.jar:9.4.27.v20200227]
   [junit4]   2>at 

[jira] [Resolved] (SOLR-14529) solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node

2020-06-01 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-14529.
---
Resolution: Information Provided

Please raise questions like this on the user's list; we try to reserve JIRAs 
for known bugs/enhancements rather than usage questions. This sounds like a 
configuration issue at first glance.

See 
http://lucene.apache.org/solr/community.html#mailing-lists-irc; there are links 
to both the Lucene and Solr mailing lists there.

A _lot_ more people will see your question on that list and may be able to help 
more quickly.


If it's determined that this really is a code issue or enhancement to Lucene or 
Solr and not a configuration/usage problem, we can raise a new JIRA or reopen 
this one.



> solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
> -
>
> Key: SOLR-14529
> URL: https://issues.apache.org/jira/browse/SOLR-14529
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.4.1
>Reporter: Yaswanth
>Priority: Major
>  Labels: solr, solrcloud
>
> While trying to set up Solr 8.4.1 + OpenJDK 11 on CentOS, we enabled the SSL 
> configuration with all the certs in place, but the issue we are seeing is that 
> when trying to hit the /update API on a non-leader Solr node, it throws an 
> error: 
> metadata":[
>  
> "error-class","org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException",
>  
> "root-error-class","org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException"],
>  "msg":"Async exception during distributed update: 
> javax.crypto.BadPaddingException: RSA private key operation failed",
>  
> "trace":"org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
>  Async exception during distributed update: javax.crypto.BadPaddingException: 
> RSA private key operation failed\n\tat 
> org.apache.solr.update.processor.DistributedZkUpdateProcessor.doDistribFinish(DistributedZkUpdateProcessor.java:1189)\n\tat
>  
> org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1096)\n\tat
>  
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)\n\tat
>  
> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)\n\tat
>  org.apache.solr.update.processor.UpdateRequestProcessor.finish
> *Strangely, this only happens when we hit a non-leader node; hitting the 
> leader node works fine without any issue and the data gets indexed.*
> We are not able to track down where exactly the issue is happening.
> Thanks,






[jira] [Commented] (SOLR-14525) For components loaded from packages SolrCoreAware, ResourceLoaderAware are not honored

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121422#comment-17121422
 ] 

ASF subversion and git services commented on SOLR-14525:


Commit e0b7984b140c4ecc9f435a22fd557fbcea30b171 in lucene-solr's branch 
refs/heads/branch_8x from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e0b7984 ]

SOLR-14525: SolrCoreAware, ResourceLoaderAware should be honored for plugin 
loaded from packages


> For components loaded from packages SolrCoreAware, ResourceLoaderAware are 
> not honored
> --
>
> Key: SOLR-14525
> URL: https://issues.apache.org/jira/browse/SOLR-14525
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: packages
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> inform() methods are not invoked if the plugins are loaded from packages






[GitHub] [lucene-solr] noblepaul merged pull request #1547: SOLR-14525 For components loaded from packages SolrCoreAware, ResourceLoaderAware are not honored

2020-06-01 Thread GitBox


noblepaul merged pull request #1547:
URL: https://github.com/apache/lucene-solr/pull/1547


   






[jira] [Commented] (SOLR-14525) For components loaded from packages SolrCoreAware, ResourceLoaderAware are not honored

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121412#comment-17121412
 ] 

ASF subversion and git services commented on SOLR-14525:


Commit e841d7625cc9cf495e611972b488390bcc8458ea in lucene-solr's branch 
refs/heads/master from Noble Paul
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e841d76 ]

SOLR-14525 For components loaded from packages SolrCoreAware, 
ResourceLoaderAware are not honored (#1547)



> For components loaded from packages SolrCoreAware, ResourceLoaderAware are 
> not honored
> --
>
> Key: SOLR-14525
> URL: https://issues.apache.org/jira/browse/SOLR-14525
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: packages
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> inform() methods are not invoked if the plugins are loaded from packages






[jira] [Commented] (SOLR-14527) The 8.5.1 release can't be verified using PGP

2020-06-01 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121393#comment-17121393
 ] 

Jan Høydahl commented on SOLR-14527:


The Solr Download page 
[https://lucene.apache.org/solr/downloads.html#verify-downloads] tells you to 
download the KEYS file from https://downloads.apache.org/lucene/KEYS - i.e. the 
top folder, not the solr sub folder, which was previously used.

I suppose we should update the README file on the archive download page and 
perhaps remove that KEYS file.

> The 8.5.1 release can't be verified using PGP
> -
>
> Key: SOLR-14527
> URL: https://issues.apache.org/jira/browse/SOLR-14527
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: website
>Affects Versions: 8.5.1
>Reporter: Per Cederqvist
>Priority: Major
>
> The [https://archive.apache.org/dist/lucene/solr/8.5.1/solr-8.5.1.tgz.asc] 
> signature of the 
> [https://archive.apache.org/dist/lucene/solr/8.5.1/solr-8.5.1.tgz] file is 
> made by the following key:
> pub rsa4096 2019-07-10 [SC]
>  E58A6F4D5B2B48AC66D5E53BD4F181881A42F9E6
> uid [ unknown] Ignacio Vera (CODE SIGNING KEY) 
> sub rsa4096 2019-07-10 [E]
>  
> However, that key is not included in 
> [https://archive.apache.org/dist/lucene/solr/KEYS], so there is no way for me 
> to verify that the file is authentic.  I could download the key from a 
> keyserver, but there are no signatures on the key, so I'm left with no way to 
> verify that the 8.5.1 distribution is legitimate.
> I'm assuming this is just an omission, and that [~ivera] simply forgot to add 
> the key to the KEYS file.






[jira] [Created] (SOLR-14529) solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node

2020-06-01 Thread Yaswanth (Jira)
Yaswanth created SOLR-14529:
---

 Summary: solr 8.4.1 with ssl tls1.2 creating an issue with 
non-leader node
 Key: SOLR-14529
 URL: https://issues.apache.org/jira/browse/SOLR-14529
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 8.4.1
Reporter: Yaswanth


While trying to set up Solr 8.4.1 + OpenJDK 11 on CentOS, we enabled the SSL 
configuration with all the certs in place, but the issue we are seeing is that 
when trying to hit the /update API on a non-leader Solr node, it throws an error: 

metadata":[
 
"error-class","org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException",
 
"root-error-class","org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException"],
 "msg":"Async exception during distributed update: 
javax.crypto.BadPaddingException: RSA private key operation failed",
 
"trace":"org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException:
 Async exception during distributed update: javax.crypto.BadPaddingException: 
RSA private key operation failed\n\tat 
org.apache.solr.update.processor.DistributedZkUpdateProcessor.doDistribFinish(DistributedZkUpdateProcessor.java:1189)\n\tat
 
org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1096)\n\tat
 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)\n\tat
 
org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)\n\tat
 org.apache.solr.update.processor.UpdateRequestProcessor.finish

*Strangely, this only happens when we hit a non-leader node; hitting the leader 
node works fine without any issue and the data gets indexed.*

We are not able to track down where exactly the issue is happening.

Thanks,






[jira] [Commented] (LUCENE-9382) Lucene's gradle version can't cope with Java 14

2020-06-01 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121364#comment-17121364
 ] 

David Smiley commented on LUCENE-9382:
--

My preference is not to burden Lucene source checking with checks it doesn't 
need, thus no log checks IMO.

> Lucene's gradle version can't cope with Java 14
> ---
>
> Key: LUCENE-9382
> URL: https://issues.apache.org/jira/browse/LUCENE-9382
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If you have JDK 14 installed as your default java, then attempting to use 
> gradle within the lucene-solr project can result in errors, particularly if 
> you have other projects that use more recent gradle versions on the same 
> machine.
> ```
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.codehaus.groovy.vmplugin.v7.Java7
> at 
> org.codehaus.groovy.vmplugin.VMPluginFactory.(VMPluginFactory.java:43)
> at 
> org.codehaus.groovy.reflection.GroovyClassValueFactory.(GroovyClassValueFactory.java:35)
> ```






[jira] [Updated] (SOLR-14419) Query DSL {"param":"ref"}

2020-06-01 Thread Mikhail Khludnev (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-14419:

Attachment: SOLR-14419-refguide.patch
Status: Patch Available  (was: Patch Available)

> Query DSL {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419-refguide.patch, SOLR-14419.patch, 
> SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Created] (SOLR-14528) Hybris 1905 and Solr 7.7.2 CPU Performance issue

2020-06-01 Thread Jira
Andrés Gutiérrez created SOLR-14528:
---

 Summary: Hybris 1905 and Solr 7.7.2 CPU Performance issue
 Key: SOLR-14528
 URL: https://issues.apache.org/jira/browse/SOLR-14528
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCLI
Affects Versions: 7.7.2
Reporter: Andrés Gutiérrez
 Attachments: SOLR TEST cliente pro SAP TUNNING - 12-05-2020.docx

We are writing to you from CEMEX Mexico. We use your solution through the Hybris 
e-commerce platform from SAP; we have been using it for 3 years and never had 
performance problems with it.

But since the end of March of this year, when we migrated from Hybris version 6.3 
to 1905 (which also changes the bundled Solr version from 6.1.0 to 7.7.2), we have 
found that when Hybris performs Solr tasks like modifying an index or running a 
full index, the CPU usage climbs and saturates, causing the server to crash.

This was reported to the SAP people, who made us change the following 
configuration parameters without achieving any significant improvement:


(/etc/default/solr.in.sh)

SOLR_JAVA_MEM="-Xms8g -Xmx8g -XX:ConcGCThreads=2 -XX:ParallelGCThreads=2"
GC_TUNE="-XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:G1MaxNewSizePercent=70 -XX:+PerfDisableSharedMem -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=250 -XX:+UseLargePages -XX:+AlwaysPreTouch"

(solrconfig.xml)

${solr.lock.type:native}
2
1
10
20
600
These configuration changes made the server crash less often, but they also made 
the indexation times much longer, with sustained high CPU usage. It is 
important to restate that no changes have been made to our code regarding 
how the indexation processes run, and this used to work quite well in the older 
Solr version (6.1). (Tests and performance metrics can be found in the attached 
document named _*SOLR TEST cliente pro SAP TUNNING - 12-05-2020.docx*_.)
 

On the other hand, they tell us that they see a significant change in this 
class, and I quote:

 

"The methods that take most of the time are related to the 
Lucene70DocValuesConsumer class. You can find attached a PPT file with 
screenshots from Dynatrace and a stack trace from Solr.

 

I inspected the source code of the file 
(https://github.com/apache/lucene-solr/blob/branch_7_7/lucene/core/src/java/org/apache/lucene/codecs/lucene70/Lucene70DocValuesConsumer.java)

to see if it used any flags or configuration parameters that could be 
configured / tuned but that is not the case.

 

This part of the Solr code is very different from the old one (Solr 6.1). I did 
not have enough time to trace all the method calls to reach a conclusion, but 
it is definitively doing things differently."

 

And they ask us to raise a ticket with you, to see if you can help us understand 
what could have changed so much that it brings us the consumption problems 
mentioned above.

As this is the first time that we report a problem directly to you, we would like 
you to guide us on what we can pass on to you, or on how to bring this problem to 
a prompt solution.

We remain entirely (and immediately) at your disposal for whatever you need for 
your analysis.

 

Regards.






[jira] [Commented] (SOLR-7323) Core Admin API looks for config sets in wrong directory

2020-06-01 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-7323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121327#comment-17121327
 ] 

Jan Høydahl commented on SOLR-7323:
---

{{configSetBaseDir}} is configurable in solr.xml and defaults to 
{{$SOLR_HOME/configsets}}.

Standalone Solr needs this as R/W since we allow editing configsets on disk.

But Cloud never touches this folder except for uploading _default on init.

Perhaps static R/O configsets need a new stable location in the distribution, 
and then we re-design the way configsets are initialized as part of an extended 
[SIP-1|https://cwiki.apache.org/confluence/x/HIxSC]? I mean, the whole 
configset story is so confusing already that it needs some TLC :) 

> Core Admin API looks for config sets in wrong directory
> ---
>
> Key: SOLR-7323
> URL: https://issues.apache.org/jira/browse/SOLR-7323
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.0
>Reporter: Mark Haase
>Assignee: David Smiley
>Priority: Major
>
> *To Reproduce*
> Try to create a core using Core Admin API and a config set:
> {code}
> curl 'http://localhost:8983/solr/admin/cores?action=CREATE&name=new_core&configSet=basic_configs'
> {code}
> *Expected Outcome*
> Core is created in `/var/solr/data/new_core` using one of the config sets 
> installed by the installer script in 
> `/opt/solr/server/solr/configsets/basic_configs`.
> *Actual Outcome*
> {code}
> <response>
>   <lst name="responseHeader">
>     <int name="status">400</int>
>     <int name="QTime">9</int>
>   </lst>
>   <lst name="error">
>     <str name="msg">Error CREATEing SolrCore 'new_core': Unable to create core [new_core] Caused by: Could not load configuration from directory /var/solr/data/configsets/basic_configs</str>
>     <int name="code">400</int>
>   </lst>
> </response>
> {code}
> Why is it looking for config sets in /var/solr/data? I don't know. If that's 
> where configsets are supposed to be placed, then why does the installer put 
> them somewhere else?
> There's no documented API to tell it to look for config sets anywhere else, 
> either. It will always search inside /var/solr/data.
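For reference, the same CREATE call can also be issued through SolrJ; a minimal sketch (assuming the stock SolrJ core-admin API, with the core and configset names taken from the report) that goes through the same Core Admin code path as the curl command above:

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CoreAdminRequest;

public class CreateCoreFromConfigSet {
  public static void main(String[] args) throws Exception {
    try (SolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      CoreAdminRequest.Create create = new CoreAdminRequest.Create();
      create.setCoreName("new_core");
      create.setConfigSet("basic_configs"); // resolved against configSetBaseDir on the server
      System.out.println(create.process(client).getStatus());
    }
  }
}
{code}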






[jira] [Commented] (SOLR-14467) inconsistent server errors combining relatedness() with allBuckets:true

2020-06-01 Thread Michael Gibney (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121301#comment-17121301
 ] 

Michael Gibney commented on SOLR-14467:
---

Yes, this all sounds good to me. I wasn't sure what was going on with the null 
vs absent stats, so I'm glad there's a logical explanation (and that that's 
fixed now!).

It took me a while to dig into the 2 beast logs, but I think the issue has to 
do with {{FacetFieldProcessorByArray}} and refinement requests where 
{{allBuckets:true}}. The attached patch ([^SOLR-14467_allBuckets_refine.patch]) 
I think fixes the problem by intercepting (and ignoring) normal calls to 
{{countAcc.collect(...)}} in such situations, and by setting {{otherAccs=accs}} 
to allow {{setNextReader(...)}} to be called on accs during what is essentially 
"single-pass" collection (with {{allBuckets}} bucket being ultimately the only 
collect target).

> inconsistent server errors combining relatedness() with allBuckets:true
> ---
>
> Key: SOLR-14467
> URL: https://issues.apache.org/jira/browse/SOLR-14467
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14467.patch, SOLR-14467.patch, SOLR-14467.patch, 
> SOLR-14467_allBuckets_refine.patch, SOLR-14467_test.patch, 
> SOLR-14467_test.patch, beast.log.txt, beast2.log.txt
>
>
> While working on randomized testing for SOLR-13132 I discovered a variety of 
> different ways that JSON Faceting's "allBuckets" option can fail when 
> combined with the "relatedness()" function.
> I haven't found a trivial way to manually reproduce this, but I have been able 
> to trigger the failures with a trivial patch to {{TestCloudJSONFacetSKG}} 
> which I will attach.
> Based on the nature of the failures it looks like it may have something to do 
> with multiple segments of different sizes, and/or resizing the SlotAccs?
> The relatedness() function doesn't have many (any?) existing tests in place 
> that leverage "allBuckets", so this is probably a bug that has always existed 
> -- it's possible it may be excessively cumbersome to fix and we might 
> need/want to just document that incompatibility and add some code to try and 
> detect if the user combines these options and, if so, fail with a 400 error?
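For illustration, a sketch of the request shape in question: {{relatedness()}} nested under a terms facet that also asks for {{allBuckets:true}}. The field names and foreground/background queries are made-up examples:

{code:java}
import org.apache.solr.common.params.ModifiableSolrParams;

public class RelatednessAllBucketsSketch {
  public static void main(String[] args) {
    ModifiableSolrParams p = new ModifiableSolrParams();
    p.set("q", "*:*");
    p.set("rows", "0");
    p.set("fore", "cat_s:electronics"); // foreground set, referenced below as $fore
    p.set("back", "*:*");               // background set, referenced below as $back
    p.set("json.facet",
        "{ skg_facet: { type: terms, field: manu_s, allBuckets: true,"
      + "               facet: { r: 'relatedness($fore,$back)' } } }");
    // Send these params with any SolrClient against a multi-shard collection.
    System.out.println(p);
  }
}
{code}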






[jira] [Updated] (SOLR-14467) inconsistent server errors combining relatedness() with allBuckets:true

2020-06-01 Thread Michael Gibney (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael Gibney updated SOLR-14467:
--
Attachment: SOLR-14467_allBuckets_refine.patch

> inconsistent server errors combining relatedness() with allBuckets:true
> ---
>
> Key: SOLR-14467
> URL: https://issues.apache.org/jira/browse/SOLR-14467
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14467.patch, SOLR-14467.patch, SOLR-14467.patch, 
> SOLR-14467_allBuckets_refine.patch, SOLR-14467_test.patch, 
> SOLR-14467_test.patch, beast.log.txt, beast2.log.txt
>
>
> While working on randomized testing for SOLR-13132 I discovered a variety of 
> different ways that JSON Faceting's "allBuckets" option can fail when 
> combined with the "relatedness()" function.
> I haven't found a trivial way to manually reproduce this, but I have been able 
> to trigger the failures with a trivial patch to {{TestCloudJSONFacetSKG}} 
> which I will attach.
> Based on the nature of the failures it looks like it may have something to do 
> with multiple segments of different sizes, and/or resizing the SlotAccs?
> The relatedness() function doesn't have many (any?) existing tests in place 
> that leverage "allBuckets", so this is probably a bug that has always existed 
> -- it's possible it may be excessively cumbersome to fix and we might 
> need/want to just document that incompatibility and add some code to try and 
> detect if the user combines these options and, if so, fail with a 400 error?






[jira] [Commented] (LUCENE-9387) Remove RAM accounting from LeafReader

2020-06-01 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121265#comment-17121265
 ] 

Andrzej Bialecki commented on LUCENE-9387:
--

In Solr usage the RAM consumption by LeafReaders is usually insignificant 
compared to all other in-memory data so even a crude approximation would be ok. 
However, I imagine that for pure Lucene-based apps on a tight memory budget it 
may indeed be an important factor.

> Remove RAM accounting from LeafReader
> -
>
> Key: LUCENE-9387
> URL: https://issues.apache.org/jira/browse/LUCENE-9387
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> Context for this issue can be found at 
> https://lists.apache.org/thread.html/r06b6a63d8689778bbc2736ec7e4e39bf89ae6973c19f2ec6247690fd%40%3Cdev.lucene.apache.org%3E.
> RAM accounting made sense when readers used lots of memory. E.g. when norms 
> were on heap, we could return memory usage of the norms array and memory 
> estimates would be very close to actual memory usage.
> However nowadays, readers consume very little memory, so RAM accounting has 
> become less valuable. Furthermore providing good estimates has become 
> incredibly complex as we can no longer focus on a couple main contributors to 
> memory usage, but would need to start considering things that we historically 
> ignored, such as field infos, segment infos, NIOFS buffers, etc.
> Let's remove RAM accounting from LeafReader?
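For context, a sketch of the kind of per-leaf RAM accounting being discussed here: summing {{ramBytesUsed()}} over the leaves of an open reader where a leaf exposes it. This is written defensively with an {{instanceof}} check and is an illustration, not code from Lucene itself:

{code:java}
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.util.Accountable;

public final class LeafRamEstimate {
  private LeafRamEstimate() {}

  /** Rough per-leaf RAM estimate for an open reader. */
  public static long estimate(IndexReader reader) {
    long total = 0;
    for (LeafReaderContext ctx : reader.leaves()) {
      // codec-backed segment readers report their heap usage via Accountable
      if (ctx.reader() instanceof Accountable) {
        total += ((Accountable) ctx.reader()).ramBytesUsed();
      }
    }
    return total;
  }
}
{code}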






[GitHub] [lucene-solr] murblanc commented on a change in pull request #1548: SOLR-14524: Harden MultiThreadedOCPTest testFillWorkQueue()

2020-06-01 Thread GitBox


murblanc commented on a change in pull request #1548:
URL: https://github.com/apache/lucene-solr/pull/1548#discussion_r433397191



##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "1" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;

Review comment:
   Extracted two methods and simplified flow.








[GitHub] [lucene-solr] murblanc commented on a change in pull request #1548: SOLR-14524: Harden MultiThreadedOCPTest testFillWorkQueue()

2020-06-01 Thread GitBox


murblanc commented on a change in pull request #1548:
URL: https://github.com/apache/lucene-solr/pull/1548#discussion_r433396724



##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "1" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;
+  for (int i = 0; i < 500; i++) {
+if (!acoll0done) {
+  acoll0done = null != getStatusResponse("0", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (!acoll1done) {

Review comment:
   Removed check for task 0, left only task 1








[GitHub] [lucene-solr] murblanc commented on a change in pull request #1548: SOLR-14524: Harden MultiThreadedOCPTest testFillWorkQueue()

2020-06-01 Thread GitBox


murblanc commented on a change in pull request #1548:
URL: https://github.com/apache/lucene-solr/pull/1548#discussion_r433396833



##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "1" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;
+  for (int i = 0; i < 500; i++) {
+if (!acoll0done) {
+  acoll0done = null != getStatusResponse("0", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (!acoll1done) {
+  acoll1done = null != getStatusResponse("1", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (acoll0done && acoll1done) break;
+Thread.sleep(100);
   }
-  Thread.sleep(100);//wait and post the next message
+  assertTrue("Queue did not process first two tasks on A_COLL, can't run 
test", acoll0done && acoll1done);
+
+  // Make sure the long running task did not finish, otherwise no way the 
B_COLL task can be tested to run in parallel with it
+  assertNull("Long running task finished too early, can't test", 
getStatusResponse("2", client).getResponse().get("MOCK_FINISHED"));
 
-  //this is not going to be blocked because it operates on another 
collection
+  // Enqueue a task on another collection not competing with the lock on 
A_COLL and see that it can be executed right away
   distributedQueue.offer(Utils.toJSON(Utils.makeMap(
   "collection", "B_COLL",
   QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
   ASYNC, "200",
   "sleep", "1"
   )));
 
-
-  Long acoll = null, bcoll = null;
+  // We now check that either the B_COLL task has completed before the 
third (long running) task on A_COLL,
+  // Or if both have completed (if this check got significantly delayed 
for some reason), we verify B_COLL was first.
+  Long acoll3 = null, bcoll = null;

Review comment:
   Thanks. Fixed.








[GitHub] [lucene-solr] murblanc commented on a change in pull request #1548: SOLR-14524: Harden MultiThreadedOCPTest testFillWorkQueue()

2020-06-01 Thread GitBox


murblanc commented on a change in pull request #1548:
URL: https://github.com/apache/lucene-solr/pull/1548#discussion_r433396537



##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "1" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;
+  for (int i = 0; i < 500; i++) {
+if (!acoll0done) {
+  acoll0done = null != getStatusResponse("0", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (!acoll1done) {
+  acoll1done = null != getStatusResponse("1", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (acoll0done && acoll1done) break;
+Thread.sleep(100);
   }
-  Thread.sleep(100);//wait and post the next message
+  assertTrue("Queue did not process first two tasks on A_COLL, can't run 
test", acoll0done && acoll1done);
+
+  // Make sure the long running task did not finish, otherwise no way the 
B_COLL task can be tested to run in parallel with it
+  assertNull("Long running task finished too early, can't test", 
getStatusResponse("2", client).getResponse().get("MOCK_FINISHED"));
 
-  //this is not going to be blocked because it operates on another 
collection
+  // Enqueue a task on another collection not competing with the lock on 
A_COLL and see that it can be executed right away
   distributedQueue.offer(Utils.toJSON(Utils.makeMap(
   "collection", "B_COLL",
   QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
   ASYNC, "200",
   "sleep", "1"
   )));
 
-
-  Long acoll = null, bcoll = null;
+  // We now check that either the B_COLL task has completed before the 
third (long running) task on A_COLL,
+  // Or if both have completed (if this check got significantly delayed 
for some reason), we verify B_COLL was first.
+  Long acoll3 = null, bcoll = null;
   for (int i = 0; i < 500; i++) {
-if (bcoll == null) {
-  CollectionAdminResponse statusResponse = getStatusResponse("200", 
client);
-  bcoll = (Long) statusResponse.getResponse().get("MOCK_FINISHED");
-}
-if (acoll == null) {
-  CollectionAdminResponse statusResponse = getStatusResponse("2", 
client);
-  acoll = (Long) statusResponse.getResponse().get("MOCK_FINISHED");
+if (acoll3 == null) {

Review comment:
   moved





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on pull request #1548: SOLR-14524: Harden MultiThreadedOCPTest testFillWorkQueue()

2020-06-01 Thread GitBox


murblanc commented on pull request #1548:
URL: https://github.com/apache/lucene-solr/pull/1548#issuecomment-636968293


   > While we're here, can we split the `test()` method into 5 proper tests 
instead of bunching them all together?
   
   It's been that way since day 1 of that file (2014?), I assume for speed reasons. I tested both variants (IntelliJ and ant); the separate tests seem slower (going from 45 seconds to 1m15s in IntelliJ, and from 2 minutes to 2m30s or so in ant). I see 16 instances of Solr being started when tests are run separately and only 4 when a single test method does everything.
   
   I have no preference.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob commented on a change in pull request #1528: SOLR-12823: remove /clusterstate.json

2020-06-01 Thread GitBox


madrob commented on a change in pull request #1528:
URL: https://github.com/apache/lucene-solr/pull/1528#discussion_r430596540



##
File path: solr/core/src/java/org/apache/solr/cloud/ZkController.java
##
@@ -491,6 +493,40 @@ public boolean isClosed() {
 assert ObjectReleaseTracker.track(this);
   }
 
+  /**
+   * Verifies if /clusterstate.json exists in Zookeeper, and if it does 
and is not empty, refuses to start and outputs
+   * a helpful message regarding collection migration.
+   *
+   * If /clusterstate.json exists and is empty, it is removed.
+   */
+  private void checkNoOldClusterstate(final SolrZkClient zkClient) throws 
InterruptedException {
+try {
+  if (!zkClient.exists(ZkStateReader.UNSUPPORTED_CLUSTER_STATE, true)) {
+return;
+  }
+
+  final byte[] data = 
zkClient.getData(ZkStateReader.UNSUPPORTED_CLUSTER_STATE, null, null, true);
+
+  if (data.length < 5) {
+// less than 5 chars is empty (it's likely just "{}"). This log will 
only occur once.
+log.warn("{} no longer supported starting with Solr 9. Found empty 
file on Zookeeper, deleting it.", ZkStateReader.UNSUPPORTED_CLUSTER_STATE);
+zkClient.delete(ZkStateReader.UNSUPPORTED_CLUSTER_STATE, -1, true);
+  } else {
+// /clusterstate.json not empty: refuse to start but do not 
automatically delete. A bit of a pain but user shouldn't
+// have older collections at this stage anyway.
+String message = ZkStateReader.UNSUPPORTED_CLUSTER_STATE + " no longer 
supported starting with Solr 9. "
++ "It is present and not empty. Cannot start Solr. Please first 
migrate collections to stateFormat=2 using an "
++ "older version of Solr or if you don't care about the data then 
delete the file from "
++ "Zookeeper using a command line tool, for example: bin/solr zk 
rm /clusterstate.json -z host:port";
+log.error(message);
+throw new SolrException(SolrException.ErrorCode.INVALID_STATE, 
message);
+  }
+} catch (KeeperException e) {
+  log.error("", e);

Review comment:
   Makes sense. If we throw anything here it will still propagate up to 
SolrDispatchFilter where it will be logged, so we don't need to double up on 
that here. We should probably clean that up in the other places where it 
happens, but that's out of scope for this PR.
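   As a small, self-contained illustration of that "log once at the boundary" idea (the class names below are stand-ins for this sketch, not the actual Solr types):

```java
public class LogOnceAtBoundary {
  // Stand-in for SolrException wrapping a lower-level checked failure.
  static class ServiceException extends RuntimeException {
    ServiceException(String msg, Throwable cause) { super(msg, cause); }
  }

  // Lower layer (stand-in for ZkController): wrap and rethrow, but do NOT log here.
  static void checkLegacyNode() {
    try {
      throw new IllegalStateException("simulated KeeperException");
    } catch (IllegalStateException e) {
      throw new ServiceException("Error checking /clusterstate.json", e);
    }
  }

  // Boundary (stand-in for SolrDispatchFilter): the single place that logs the failure.
  public static void main(String[] args) {
    try {
      checkLegacyNode();
    } catch (ServiceException e) {
      System.err.println("logged once at the boundary: " + e.getMessage()
          + " (cause: " + e.getCause() + ")");
    }
  }
}
```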





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob commented on a change in pull request #1548: SOLR-14524: Harden MultiThreadedOCPTest testFillWorkQueue()

2020-06-01 Thread GitBox


madrob commented on a change in pull request #1548:
URL: https://github.com/apache/lucene-solr/pull/1548#discussion_r433292874



##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "10000" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;
+  for (int i = 0; i < 500; i++) {
+if (!acoll0done) {
+  acoll0done = null != getStatusResponse("0", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (!acoll1done) {

Review comment:
   Assuming the queue works as advertised, can we skip checking a0 and only 
check a1?

##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "10000" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;

Review comment:
   I found this section a little bit hard to read, would prefer something 
more verbose but potentially easier to understand at a glance.

##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "10000" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;
+  for (int i = 0; i < 500; i++) {
+if (!acoll0done) {
+  acoll0done = null != getStatusResponse("0", 
client).getResponse().get("MOCK_FINISHED");

Review comment:
   I think one way to make this cleaner is to extract 
`getStatusResponse().getResponse.get("MOCK_FINISHED")` out into a separate 
method.
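   A minimal sketch of the kind of helper being suggested (the method name `mockTaskFinishTime` is hypothetical; it assumes the test class's existing `getStatusResponse(String, SolrClient)` helper and the `MOCK_FINISHED` key shown in the diff):

```java
// Assumed imports, already used elsewhere in MultiThreadedOCPTest:
// import java.io.IOException;
// import org.apache.solr.client.solrj.SolrClient;
// import org.apache.solr.client.solrj.SolrServerException;
// import org.apache.solr.client.solrj.response.CollectionAdminResponse;

/** Returns the MOCK_FINISHED timestamp for the given async request id, or null if the task has not finished. */
private Long mockTaskFinishTime(String requestId, SolrClient client) throws IOException, SolrServerException {
  CollectionAdminResponse statusResponse = getStatusResponse(requestId, client);
  return (Long) statusResponse.getResponse().get("MOCK_FINISHED");
}
```

   The wait loops could then test `mockTaskFinishTime("0", client) != null` instead of repeating the chained calls.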

##
File path: solr/core/src/test/org/apache/solr/cloud/MultiThreadedOCPTest.java
##
@@ -79,40 +78,57 @@ private void testFillWorkQueue() throws Exception {
 QUEUE_OPERATION, MOCK_COLL_TASK.toLower(),
 ASYNC, String.valueOf(i),
 
-"sleep", (i == 0 ? "1000" : "1") //first task waits for 1 second, 
and thus blocking
-// all other tasks. Subsequent tasks only wait for 1ms
+// third task waits for a long time, and thus blocks the queue for 
all other tasks for A_COLL.
+// Subsequent tasks as well as the first two only wait for 1ms
+"sleep", (i == 2 ? "10000" : "1")
 )));
 log.info("MOCK task added {}", i);
+  }
 
+  // Wait until we see the first two A_COLL tasks getting processed
+  boolean acoll0done = false, acoll1done = false;
+  for (int i = 0; i < 500; i++) {
+if (!acoll0done) {
+  acoll0done = null != getStatusResponse("0", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (!acoll1done) {
+  acoll1done = null != getStatusResponse("1", 
client).getResponse().get("MOCK_FINISHED");
+}
+if (acoll0done && acoll1done) break;
+Thread.sleep(100);
   }
-  Thread.sleep(100);//wait and post the next message
+  assertTrue("Queue did not process first two tasks on A_COLL, can't run 
test", acoll0done && acoll1done);
+
+  // Make sure the long 

[GitHub] [lucene-solr] murblanc commented on pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-06-01 Thread GitBox


murblanc commented on pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#issuecomment-636907104


   > @murblanc if you are done with the changes you planned to do , I shall do 
a review and merge this soon
   
   I am done @noblepaul, so please go ahead.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9382) Lucene's gradle version can't cope with Java 14

2020-06-01 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121063#comment-17121063
 ] 

Erick Erickson commented on LUCENE-9382:


[~dweiss]  Your changes look good to me. I haven't a clue why switches would 
barf. Am I correct that this change makes the script work and there's nothing 
for me to do except say thanks?

There are ~40 un-commented logging calls in Lucene, all in lucene/luke, as well 
as some commented-out logging calls elsewhere. I dithered over whether to 
include Lucene or not. On the one hand the logging calls in the Luke module are 
entirely irrelevant to the intent of looking at inefficiencies in the use of 
logging messages; Luke doesn't count in that respect. OTOH, it adds maybe 2-3 
seconds to the check.

I'd have no objection to restricting the check to Solr. David Smiley also had 
the same question FWIW.

What probably makes the most sense for Lucene would be either a different check, one that just fails on _any_ logging call except in Luke, or not checking at all given the dearth of real calls.

> Lucene's gradle version can't cope with Java 14
> ---
>
> Key: LUCENE-9382
> URL: https://issues.apache.org/jira/browse/LUCENE-9382
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If you have JDK 14 installed as your default java, then attempting to use 
> gradle within the lucene-solr project can result in errors, particularly if 
> you have other projects that use more recent gradle versions on the same 
> machine.
> ```
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.codehaus.groovy.vmplugin.v7.Java7
> at 
> org.codehaus.groovy.vmplugin.VMPluginFactory.<clinit>(VMPluginFactory.java:43)
> at 
> org.codehaus.groovy.reflection.GroovyClassValueFactory.<clinit>(GroovyClassValueFactory.java:35)
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob commented on a change in pull request #1550: LUCENE-9383: benchmark module: Gradle conversion (complete)

2020-06-01 Thread GitBox


madrob commented on a change in pull request #1550:
URL: https://github.com/apache/lucene-solr/pull/1550#discussion_r433278450



##
File path: lucene/benchmark/build.gradle
##
@@ -15,27 +15,138 @@
  * limitations under the License.
  */
 
-
-apply plugin: 'java-library'
+apply plugin: 'java'
+// NOT a 'java-library'.  Maybe 'application' but seems too limiting.
 
 description = 'System for benchmarking Lucene'
 
 dependencies {  
-  api project(':lucene:core')
-
-  implementation project(':lucene:analysis:common')
-  implementation project(':lucene:facet')
-  implementation project(':lucene:highlighter')
-  implementation project(':lucene:queries')
-  implementation project(':lucene:spatial-extras')
-  implementation project(':lucene:queryparser')
-
-  implementation "org.apache.commons:commons-compress"
-  implementation "com.ibm.icu:icu4j"
-  implementation "org.locationtech.spatial4j:spatial4j"
-  implementation("net.sourceforge.nekohtml:nekohtml", {
+  compile project(':lucene:core')
+
+  compile project(':lucene:analysis:common')
+  compile project(':lucene:facet')
+  compile project(':lucene:highlighter')
+  compile project(':lucene:queries')
+  compile project(':lucene:spatial-extras')
+  compile project(':lucene:queryparser')
+
+  compile "org.apache.commons:commons-compress"
+  compile "com.ibm.icu:icu4j"
+  compile "org.locationtech.spatial4j:spatial4j"
+  compile("net.sourceforge.nekohtml:nekohtml", {
 exclude module: "xml-apis"
   })
 
-  testImplementation project(':lucene:test-framework')
+  runtime project(':lucene:analysis:icu')
+
+  testCompile project(':lucene:test-framework')
+}
+
+ext {
+  tempDir = file("temp")
+  workDir = file("work")
+}
+
+task run(type: JavaExec) {
+  description "Run a perf test (optional: -PtaskAlg=conf/your-algorithm-file 
-PmaxHeapSize=1G)"
+  main 'org.apache.lucene.benchmark.byTask.Benchmark'
+  classpath sourceSets.main.runtimeClasspath
+  // allow these to be specified on the CLI via -PtaskAlg=  for example
+  def taskAlg = project.properties['taskAlg'] ?: 'conf/micro-standard.alg'
+  args = [taskAlg]
+
+  maxHeapSize = project.properties['maxHeapSize'] ?: '1G'
+
+  String stdOutStr = project.properties['standardOutput']
+  if (stdOutStr != null) {
+standardOutput = new File(stdOutStr).newOutputStream()
+  }
+
+  debugOptions {
+enabled = false
+port = 5005
+suspend = true
+  }
+}
+
+/* Old "collation" Ant target:
+gradle getTop100kWikiWordFiles run -PtaskAlg=conf/collation.alg 
-PstandardOutput=work/collation.benchmark.output.txt
+perl -CSD scripts/collation.bm2jira.pl work/collation.benchmark.output.txt
+ */
+
+/* Old "shingle" Ant target:
+gradle reuters run -PtaskAlg=conf/shingle.alg 
-PstandardOutput=work/shingle.benchmark.output.txt
+perl -CSD scripts/shingle.bm2jira.pl work/shingle.benchmark.output.txt
+ */
+
+// The remaining tasks just get / extract / prepare data
+
+task getEnWiki(type: Download) {
+  src 
"https://home.apache.org/~dsmiley/data/enwiki-20070527-pages-articles.xml.bz2;
+  dest file("$tempDir/${src.file.split('/').last()}")
+  overwrite false
+  compress false
+
+  doLast {
+ant.bunzip2(src: dest, dest: tempDir) // will chop off .bz2
+  }
+}
+
+task getGeoNames(type: Download) {
+  // note: latest data is at: 
https://download.geonames.org/export/dump/allCountries.zip
+  //   and then randomize with: gsort -R -S 1500M file.txt > 
file_random.txt
+  //   and then compress with: bzip2 -9 -k file_random.txt
+  src 
"https://home.apache.org/~dsmiley/data/geonames_20130921_randomOrder_allCountries.txt.bz2;
+  dest file("$tempDir/${src.file.split('/').last()}")
+  overwrite false
+  compress false
+
+  doLast {
+ant.bunzip2(src: dest, dest: tempDir) // will chop off .bz2
+  }
+}
+
+task getReuters(type: Download) {
+  // note: there is no HTTPS url and we don't care because this is merely 
test/perf data
+  src 
"http://www.daviddlewis.com/resources/testcollections/reuters21578/reuters21578.tar.gz;
+  dest file("$tempDir/${src.file.split('/').last()}")
+  overwrite false
+  compress false
+}
+task extractReuters(type: Copy) {
+  dependsOn getReuters
+  from(tarTree(getReuters.dest)) { // can expand a .gz on the fly
+exclude '*.txt'
+  }
+  into file("$workDir/reuters")
+}
+task reuters(type: JavaExec) {
+  dependsOn extractReuters
+  def input = extractReuters.outputs.files[0]
+  def output = "$workDir/reuters-out"
+  inputs.dir(input)
+  outputs.dir(output)
+  main = 'org.apache.lucene.benchmark.utils.ExtractReuters'
+  classpath = sourceSets.main.runtimeClasspath
+  jvmArgs = ['-Xmx1G']

Review comment:
   Use `maxHeapSize`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [lucene-solr] dsmiley opened a new pull request #1550: LUCENE-9383: benchmark module: Gradle conversion (complete)

2020-06-01 Thread GitBox


dsmiley opened a new pull request #1550:
URL: https://github.com/apache/lucene-solr/pull/1550


   I switched from "java-library" type of Gradle plugin/module to more plainly 
"java" because this module isn't just some library, it's closer to an app.  I 
tried type "application" but I didn't have the same control that the "JavaExec" 
task gives you.  One consequence of not using "java-library" is that the names 
of the categories of dependencies are different, and so this appears 
odd/unusual relative to the other modules.
   
   I did not convert the "collation" and "shingle" Ant targets, but I put the two-line CLI equivalents for both there in the form of a comment.  I ran them and they worked... albeit with some confusion in one of the perl scripts that thought the "darwin" OS was ==~ Windows simply because it contained "win" :-). 
   
   Notice the style of "getEnWiki" and "getGeoNames" and 
"getTop100kWikiWordFiles":  One task that does all it needs to do by adding a 
final step in doLast.  Now notice a different style: "reuters" (depending on 
extractReuters depending on getReuters).  This is more verbose, but admittedly 
for this case it has to do more.  I'm not well versed enough in Gradle to know 
which style is preferable.  I lean towards short & concise.  The current state 
is a nocommit IMO; need to harmonize the approaches.
   
   I did not convert 
https://www-2.cs.cmu.edu/afs/cs.cmu.edu/project/theo-20/www/data/news20.tar.gz 
(aka news20) or 
https://people.csail.mit.edu/u/j/jrennie/public_html/20Newsgroups/20news-18828.tar.gz
 or https://kdd.ics.uci.edu/databases/20newsgroups/mini_newsgroups.tar.gz (aka 
mini-news) because I could not find .alg files that used them.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14527) The 8.5.1 release can't be verified using PGP

2020-06-01 Thread Per Cederqvist (Jira)
Per Cederqvist created SOLR-14527:
-

 Summary: The 8.5.1 release can't be verified using PGP
 Key: SOLR-14527
 URL: https://issues.apache.org/jira/browse/SOLR-14527
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: website
Affects Versions: 8.5.1
Reporter: Per Cederqvist


The [https://archive.apache.org/dist/lucene/solr/8.5.1/solr-8.5.1.tgz.asc] 
signature of the 
[https://archive.apache.org/dist/lucene/solr/8.5.1/solr-8.5.1.tgz] file is made 
by the following key:

pub rsa4096 2019-07-10 [SC]
 E58A6F4D5B2B48AC66D5E53BD4F181881A42F9E6
uid [ unknown] Ignacio Vera (CODE SIGNING KEY) 
sub rsa4096 2019-07-10 [E]

 

However, that key is not included in 
[https://archive.apache.org/dist/lucene/solr/KEYS], so there is no way for me 
to verify that the file is authentic.  I could download the key from a 
keyserver, but there are no signatures on the key, so I'm left with no way to 
verify that the 8.5.1 distribution is legitimate.

I'm assuming this is just an omission, and that [~ivera] simply forgot to add 
the key to the KEYS file.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul commented on a change in pull request #1547: SOLR-14525 For components loaded from packages SolrCoreAware, ResourceLoaderAware are not honored

2020-06-01 Thread GitBox


noblepaul commented on a change in pull request #1547:
URL: https://github.com/apache/lucene-solr/pull/1547#discussion_r433146984



##
File path: solr/core/src/java/org/apache/solr/pkg/PackageLoader.java
##
@@ -17,31 +17,25 @@
 
 package org.apache.solr.pkg;
 
-import java.io.Closeable;
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.nio.file.Path;
-import java.nio.file.Paths;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.HashMap;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Objects;
-import java.util.Set;
-import java.util.concurrent.ConcurrentHashMap;
-import java.util.concurrent.CopyOnWriteArrayList;
-
+import org.apache.lucene.analysis.util.ResourceLoaderAware;
 import org.apache.solr.common.MapWriter;
+import org.apache.solr.common.SolrException;
 import org.apache.solr.common.cloud.ZkStateReader;
 import org.apache.solr.core.CoreContainer;
 import org.apache.solr.core.SolrCore;
 import org.apache.solr.core.SolrResourceLoader;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.io.Closeable;

Review comment:
   OK

##
File path: solr/core/src/java/org/apache/solr/pkg/PackageLoader.java
##
@@ -301,6 +295,32 @@ public String toString() {
   }
 }
   }
+  static class PackageResourceLoader extends SolrResourceLoader {
+
+PackageResourceLoader(String name, List classpath, Path instanceDir, 
ClassLoader parent) {
+  super(name, classpath, instanceDir, parent);
+}
+
+@Override
+public <T> boolean addToCoreAware(T obj) {
+  //do not do anything
+  //this class is
+  return false;
+}
+
+@Override
+public <T> boolean addToResourceLoaderAware(T obj) {
+  if (obj instanceof ResourceLoaderAware) {
+assertAwareCompatibility(ResourceLoaderAware.class, obj);
+try {
+  ((ResourceLoaderAware) obj).inform(this);

Review comment:
   I think the order has to be
   
   1. create instance
   2. `init()`
   3. `SolrCoreAware.inform()` , `ResourceLoaderAware.inform()`
   
   I've made the relevant changes. Pls review
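   A minimal, self-contained sketch of that ordering (the interfaces, class names and arguments below are simplified stand-ins for illustration, not the actual Solr/Lucene types):

```java
import java.util.Map;

// Simplified stand-ins for the real *Aware interfaces; names and signatures are illustrative only.
interface DemoResourceLoaderAware { void informLoader(Object loader); }
interface DemoSolrCoreAware { void informCore(Object core); }

class DemoComponent implements DemoResourceLoaderAware, DemoSolrCoreAware {
  DemoComponent() { System.out.println("1. constructed"); }
  void init(Map<String, String> args) { System.out.println("2. init(" + args + ")"); }
  @Override public void informCore(Object core) { System.out.println("3. SolrCoreAware.inform()"); }
  @Override public void informLoader(Object loader) { System.out.println("3. ResourceLoaderAware.inform()"); }
}

public class PluginLifecycleOrder {
  public static void main(String[] args) {
    DemoComponent c = new DemoComponent();                // 1. create instance
    c.init(Map.of("class", "mypkg:com.example.MyComp"));  // 2. init()
    c.informCore(new Object());                           // 3. SolrCoreAware.inform()
    c.informLoader(new Object());                         // 3. ResourceLoaderAware.inform()
  }
}
```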





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9382) Lucene's gradle version can't cope with Java 14

2020-06-01 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17121007#comment-17121007
 ] 

Dawid Weiss commented on LUCENE-9382:
-

I corrected the script a bit - switches somehow didn't digest well.

I also wonder if this log verification needs to be applied to all projects or 
just limited to Solr (we don't do any logging in Lucene, do we?).

> Lucene's gradle version can't cope with Java 14
> ---
>
> Key: LUCENE-9382
> URL: https://issues.apache.org/jira/browse/LUCENE-9382
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If you have JDK 14 installed as your default java, then attempting to use 
> gradle within the lucene-solr project can result in errors, particularly if 
> you have other projects that use more recent gradle versions on the same 
> machine.
> ```
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.codehaus.groovy.vmplugin.v7.Java7
> at 
> org.codehaus.groovy.vmplugin.VMPluginFactory.<clinit>(VMPluginFactory.java:43)
> at 
> org.codehaus.groovy.reflection.GroovyClassValueFactory.<clinit>(GroovyClassValueFactory.java:35)
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9382) Lucene's gradle version can't cope with Java 14

2020-06-01 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120995#comment-17120995
 ] 

Dawid Weiss commented on LUCENE-9382:
-

This github PR brings Java 14 support. I had to disable Erick's log checker 
script because it breaks compilation AST somehow (internal to gradle). Erick - 
would you be able to take a look at this?

[https://github.com/apache/lucene-solr/pull/1549]

> Lucene's gradle version can't cope with Java 14
> ---
>
> Key: LUCENE-9382
> URL: https://issues.apache.org/jira/browse/LUCENE-9382
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Assignee: Dawid Weiss
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If you have JDK 14 installed as your default java, then attempting to use 
> gradle within the lucene-solr project can result in errors, particularly if 
> you have other projects that use more recent gradle versions on the same 
> machine.
> ```
> java.lang.NoClassDefFoundError: Could not initialize class 
> org.codehaus.groovy.vmplugin.v7.Java7
> at 
> org.codehaus.groovy.vmplugin.VMPluginFactory.<clinit>(VMPluginFactory.java:43)
> at 
> org.codehaus.groovy.reflection.GroovyClassValueFactory.<clinit>(GroovyClassValueFactory.java:35)
> ```



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dweiss opened a new pull request #1549: LUCENE-9382: update gradle to 6.4.1.

2020-06-01 Thread GitBox


dweiss opened a new pull request #1549:
URL: https://github.com/apache/lucene-solr/pull/1549


   Upgrades gradle to 6.4.1. The log-check script doesn't work for me and 
fails with an internal gradle ugliness (AST incompatibility of some sort). 
@ErickErickson would you be able to take a look and maybe ask gradle guys on 
slack?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Assigned] (SOLR-14517) MM local params value is ignored in edismax queries with operators

2020-06-01 Thread Jason Gerlowski (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski reassigned SOLR-14517:
--

Assignee: Jason Gerlowski

> MM local params value is ignored in edismax queries with operators
> --
>
> Key: SOLR-14517
> URL: https://issues.apache.org/jira/browse/SOLR-14517
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 8.4.1
>Reporter: Yuriy Koval
>Assignee: Jason Gerlowski
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When specifying "mm" as a local parameter:
> {color:#e01e5a}q=\{!edismax mm="100%25" v=$qq}=foo %2Bbar=0=* 
> _query_{color}
>  {color:#1d1c1d}is not functionally equivalent to{color}
>  {{{color:#e01e5a}q=\{!edismax v=$qq}=foo %2Bbar=0=* 
> _query_=100%25{color}}}
>  It seems to be caused by the following code in 
> {color:#e01e5a}ExtendedDismaxQParser{color}
>  
> {code:java}
> // For correct lucene queries, turn off mm processing if no explicit mm spec 
> was provided
> // and there were explicit operators (except for AND).
> if (query instanceof BooleanQuery) {
>  // config.minShouldMatch holds the value of mm which MIGHT have come from 
> the user,
>  // but could also have been derived from q.op.
>  String mmSpec = config.minShouldMatch;
>  if (foundOperators(clauses, config.lowercaseOperators)) {
>  mmSpec = params.get(DisMaxParams.MM, "0%"); // Use provided mm spec if 
> present, otherwise turn off mm processing
>  }{code}
>  
> We need to check if user specified "mm" explicitly. We could change
> {code:java}
> mmSpec = params.get(DisMaxParams.MM, "0%");
> {code}
> to
> {code:java}
> mmSpec = config.solrParams.get(DisMaxParams.MM, "0%");
> {code}
> so we check local params too.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14517) MM local params value is ignored in edismax queries with operators

2020-06-01 Thread Jason Gerlowski (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-14517.

Fix Version/s: 8.6
   master (9.0)
   Resolution: Fixed

> MM local params value is ignored in edismax queries with operators
> --
>
> Key: SOLR-14517
> URL: https://issues.apache.org/jira/browse/SOLR-14517
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 8.4.1
>Reporter: Yuriy Koval
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: master (9.0), 8.6
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When specifying "mm" as a local parameter:
> {color:#e01e5a}q=\{!edismax mm="100%25" v=$qq}=foo %2Bbar=0=* 
> _query_{color}
>  {color:#1d1c1d}is not functionally equivalent to{color}
>  {{{color:#e01e5a}q=\{!edismax v=$qq}=foo %2Bbar=0=* 
> _query_=100%25{color}}}
>  It seems to be caused by the following code in 
> {color:#e01e5a}ExtendedDismaxQParser{color}
>  
> {code:java}
> // For correct lucene queries, turn off mm processing if no explicit mm spec 
> was provided
> // and there were explicit operators (except for AND).
> if (query instanceof BooleanQuery) {
>  // config.minShouldMatch holds the value of mm which MIGHT have come from 
> the user,
>  // but could also have been derived from q.op.
>  String mmSpec = config.minShouldMatch;
>  if (foundOperators(clauses, config.lowercaseOperators)) {
>  mmSpec = params.get(DisMaxParams.MM, "0%"); // Use provided mm spec if 
> present, otherwise turn off mm processing
>  }{code}
>  
> We need to check if user specified "mm" explicitly. We could change
> {code:java}
> mmSpec = params.get(DisMaxParams.MM, "0%");
> {code}
> to
> {code:java}
> mmSpec = config.solrParams.get(DisMaxParams.MM, "0%");
> {code}
> so we check local params too.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14517) MM local params value is ignored in edismax queries with operators

2020-06-01 Thread Jason Gerlowski (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120989#comment-17120989
 ] 

Jason Gerlowski commented on SOLR-14517:


Hey Yuriy, thanks for the changes!  I've reviewed them and merged to master and 
branch_8x, so this will be fixed starting in 8.6 (or 9.0, if there are no more 
minor 8.x releases).

Hope your first contribution went smoothly!  Welcome to the community.

> MM local params value is ignored in edismax queries with operators
> --
>
> Key: SOLR-14517
> URL: https://issues.apache.org/jira/browse/SOLR-14517
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 8.4.1
>Reporter: Yuriy Koval
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When specifying "mm" as a local parameter:
> {color:#e01e5a}q=\{!edismax mm="100%25" v=$qq}=foo %2Bbar=0=* 
> _query_{color}
>  {color:#1d1c1d}is not functionally equivalent to{color}
>  {{{color:#e01e5a}q=\{!edismax v=$qq}=foo %2Bbar=0=* 
> _query_=100%25{color}}}
>  It seems to be caused by the following code in 
> {color:#e01e5a}ExtendedDismaxQParser{color}
>  
> {code:java}
> // For correct lucene queries, turn off mm processing if no explicit mm spec 
> was provided
> // and there were explicit operators (except for AND).
> if (query instanceof BooleanQuery) {
>  // config.minShouldMatch holds the value of mm which MIGHT have come from 
> the user,
>  // but could also have been derived from q.op.
>  String mmSpec = config.minShouldMatch;
>  if (foundOperators(clauses, config.lowercaseOperators)) {
>  mmSpec = params.get(DisMaxParams.MM, "0%"); // Use provided mm spec if 
> present, otherwise turn off mm processing
>  }{code}
>  
> We need to check if user specified "mm" explicitly. We could change
> {code:java}
> mmSpec = params.get(DisMaxParams.MM, "0%");
> {code}
> to
> {code:java}
> mmSpec = config.solrParams.get(DisMaxParams.MM, "0%");
> {code}
> so we check local params too.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14517) MM local params value is ignored in edismax queries with operators

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120988#comment-17120988
 ] 

ASF subversion and git services commented on SOLR-14517:


Commit a7fda365c5440ddf7b1eb4cb1c71fb4eb14890c4 in lucene-solr's branch 
refs/heads/branch_8x from Yuriy Koval
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a7fda36 ]

SOLR-14517 Obey "mm" local param on edismax queries with operators (#1540)

Prior to this commit query parsing looked for mm in query-params, but neglected 
to check local params for a subset of queries.

> MM local params value is ignored in edismax queries with operators
> --
>
> Key: SOLR-14517
> URL: https://issues.apache.org/jira/browse/SOLR-14517
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 8.4.1
>Reporter: Yuriy Koval
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> When specifying "mm" as a local parameter:
> {color:#e01e5a}q=\{!edismax mm="100%25" v=$qq}=foo %2Bbar=0=* 
> _query_{color}
>  {color:#1d1c1d}is not functionally equivalent to{color}
>  {{{color:#e01e5a}q=\{!edismax v=$qq}=foo %2Bbar=0=* 
> _query_=100%25{color}}}
>  It seems to be caused by the following code in 
> {color:#e01e5a}ExtendedDismaxQParser{color}
>  
> {code:java}
> // For correct lucene queries, turn off mm processing if no explicit mm spec 
> was provided
> // and there were explicit operators (except for AND).
> if (query instanceof BooleanQuery) {
>  // config.minShouldMatch holds the value of mm which MIGHT have come from 
> the user,
>  // but could also have been derived from q.op.
>  String mmSpec = config.minShouldMatch;
>  if (foundOperators(clauses, config.lowercaseOperators)) {
>  mmSpec = params.get(DisMaxParams.MM, "0%"); // Use provided mm spec if 
> present, otherwise turn off mm processing
>  }{code}
>  
> We need to check if user specified "mm" explicitly. We could change
> {code:java}
> mmSpec = params.get(DisMaxParams.MM, "0%");
> {code}
> to
> {code:java}
> mmSpec = config.solrParams.get(DisMaxParams.MM, "0%");
> {code}
> so we check local params too.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14517) MM local params value is ignored in edismax queries with operators

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120971#comment-17120971
 ] 

ASF subversion and git services commented on SOLR-14517:


Commit cb7e948d2ed061cb1e3ec37d3ffae85b7dc15eda in lucene-solr's branch 
refs/heads/master from Yuriy Koval
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=cb7e948 ]

SOLR-14517 Obey "mm" local param on edismax queries with operators (#1540)

Prior to this commit query parsing looked for mm in query-params, but neglected 
to check local params for a subset of queries.

> MM local params value is ignored in edismax queries with operators
> --
>
> Key: SOLR-14517
> URL: https://issues.apache.org/jira/browse/SOLR-14517
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 8.4.1
>Reporter: Yuriy Koval
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> When specifying "mm" as a local parameter:
> {color:#e01e5a}q=\{!edismax mm="100%25" v=$qq}=foo %2Bbar=0=* 
> _query_{color}
>  {color:#1d1c1d}is not functionally equivalent to{color}
>  {{{color:#e01e5a}q=\{!edismax v=$qq}=foo %2Bbar=0=* 
> _query_=100%25{color}}}
>  It seems to be caused by the following code in 
> {color:#e01e5a}ExtendedDismaxQParser{color}
>  
> {code:java}
> // For correct lucene queries, turn off mm processing if no explicit mm spec 
> was provided
> // and there were explicit operators (except for AND).
> if (query instanceof BooleanQuery) {
>  // config.minShouldMatch holds the value of mm which MIGHT have come from 
> the user,
>  // but could also have been derived from q.op.
>  String mmSpec = config.minShouldMatch;
>  if (foundOperators(clauses, config.lowercaseOperators)) {
>  mmSpec = params.get(DisMaxParams.MM, "0%"); // Use provided mm spec if 
> present, otherwise turn off mm processing
>  }{code}
>  
> We need to check if user specified "mm" explicitly. We could change
> {code:java}
> mmSpec = params.get(DisMaxParams.MM, "0%");
> {code}
> to
> {code:java}
> mmSpec = config.solrParams.get(DisMaxParams.MM, "0%");
> {code}
> so we check local params too.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] gerlowskija merged pull request #1540: SOLR-14517 MM local params value is ignored in edismax queries with operators

2020-06-01 Thread GitBox


gerlowskija merged pull request #1540:
URL: https://github.com/apache/lucene-solr/pull/1540


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14422) Solr 8.5 Admin UI shows Angular placeholders on first load / refresh

2020-06-01 Thread David Eric Pugh (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120928#comment-17120928
 ] 

David Eric Pugh commented on SOLR-14422:


I've been learning more Angular 1 this past year than I wanted, as part of 
supporting github.com/o19s/quepid. My understanding is that ngCloak exists for exactly this kind of issue, and I've fixed some similar initialization issues in Quepid with this approach. I am headed out of town for a few days, but if [~krisden] doesn't get a chance to look at it, ping me and I can look at it next week.

> Solr 8.5 Admin UI shows Angular placeholders on first load / refresh
> 
>
> Key: SOLR-14422
> URL: https://issues.apache.org/jira/browse/SOLR-14422
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 8.5, 8.5.1, 8.5.2
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14422.patch, image-2020-04-21-14-51-18-923.png
>
>
> When loading / refreshing the Admin UI in 8.5.1, it briefly but _visibly_ 
> shows a placeholder for the "SolrCore Initialization Failures" error message, 
> with a lot of redness. It looks like there is a real problem. Obviously the 
> message then disappears, and it can be ignored.
> However, if I was a first time user, it would not give me confidence that 
> everything is okay. In a way, an error message that appears briefly then 
> disappears before I can finish reading it is worse than one which just stays 
> there.
>  
> Here's a screenshot of what I mean  !image-2020-04-21-14-51-18-923.png!
>  
> I suspect that SOLR-14132 will have caused this
>  
> From a (very) brief googling it seems like using the ng-cloak attribute is 
> the right way to fix this, and it certainly seems to work for me. 
> https://docs.angularjs.org/api/ng/directive/ngCloak
> I will attach a patch with it, but if someone who actually knows Angular etc 
> has a better approach then please go for it



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14422) Solr 8.5 Admin UI shows Angular placeholders on first load / refresh

2020-06-01 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14422:

Affects Version/s: 8.5.2

> Solr 8.5 Admin UI shows Angular placeholders on first load / refresh
> 
>
> Key: SOLR-14422
> URL: https://issues.apache.org/jira/browse/SOLR-14422
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 8.5, 8.5.1, 8.5.2
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14422.patch, image-2020-04-21-14-51-18-923.png
>
>
> When loading / refreshing the Admin UI in 8.5.1, it briefly but _visibly_ 
> shows a placeholder for the "SolrCore Initialization Failures" error message, 
> with a lot of redness. It looks like there is a real problem. Obviously the 
> message then disappears, and it can be ignored.
> However, if I was a first time user, it would not give me confidence that 
> everything is okay. In a way, an error message that appears briefly then 
> disappears before I can finish reading it is worse than one which just stays 
> there.
>  
> Here's a screenshot of what I mean  !image-2020-04-21-14-51-18-923.png!
>  
> I suspect that SOLR-14132 will have caused this
>  
> From a (very) brief googling it seems like using the ng-cloak attribute is 
> the right way to fix this, and it certainly seems to work for me. 
> https://docs.angularjs.org/api/ng/directive/ngCloak
> I will attach a patch with it, but if someone who actually knows Angular etc 
> has a better approach then please go for it



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] markharwood edited a comment on pull request #1541: RegExp - add case insensitive matching option

2020-06-01 Thread GitBox


markharwood edited a comment on pull request #1541:
URL: https://github.com/apache/lucene-solr/pull/1541#issuecomment-636710038


   On reflection, you're right - the single flag is trappy.
   I'd like to refactor this class to make this simpler. The root problem we 
have is propagating parser state (flags/options) down to the objects that 
represent clauses in the parse tree. This is made difficult by the fact that 
RegExp is a single class representing both the parser and the parsed nodes.
   I suggest refactoring so that :
   1) RegExp remains the user-facing class with the public constructor and has 
the parsing logic
   2) We use a new private class RegExpClause to hold clause state, but being 
an inner class it has access to the flags in the outer RegExp instance that 
contains it.
   
   This should solve the problem of propagating settings and give us a sounder 
footing to build on.
   Should I do this refactor as part of this PR or another @jpountz ?
   
   One issue is that this would technically be a breaking change as 
https://issues.apache.org/jira/projects/LUCENE/issues/LUCENE-9371 opened up the 
internal state of the parser and we will change the class of nodes.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul commented on pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-06-01 Thread GitBox


noblepaul commented on pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#issuecomment-636748756


   @murblanc if you are done with the changes you planned to do , I shall do a 
review and merge this soon



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14524) Harden MultiThreadedOCPTest

2020-06-01 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120884#comment-17120884
 ] 

Ilan Ginzburg commented on SOLR-14524:
--

[https://github.com/apache/lucene-solr/pull/1548] is doing three things:
* Have the test wait for processing to have started in Overseer and check that 
processing hasn't completed yet,
* Fail with meaningful messages when the test didn't really fail but execution 
order made it impossible to run the test correctly,
* Significantly increase the runtime of the "long" task (from 1 to 10 seconds), 
yet remove the wait for that task to complete. Idea is to reduce further timing 
issues causing test failures without slowing down the test (test is likely 
faster now than it was, but the specific subtest being changed here contributes 
only a small fraction of total test runtime).
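
As an illustration of the first two points, here is a self-contained sketch of a timed precondition wait that fails with a meaningful message rather than a misleading functional failure (the helper name and timings are illustrative, not the actual test code):
{code:java}
import java.util.function.BooleanSupplier;

public class WaitForPrecondition {
  /**
   * Polls the condition until it becomes true or the timeout expires.
   * Fails with the given message so the test reports "couldn't run the test"
   * rather than a misleading functional failure.
   */
  static void waitForOrFail(String message, BooleanSupplier condition,
                            long timeoutMs, long pollMs) throws InterruptedException {
    long deadline = System.nanoTime() + timeoutMs * 1_000_000L;
    while (System.nanoTime() < deadline) {
      if (condition.getAsBoolean()) {
        return;
      }
      Thread.sleep(pollMs);
    }
    throw new AssertionError(message);
  }

  public static void main(String[] args) throws InterruptedException {
    long start = System.currentTimeMillis();
    // The condition "at least 200ms have elapsed" stands in for
    // "the first two A_COLL tasks have been processed".
    waitForOrFail("Queue did not process first two tasks on A_COLL, can't run test",
        () -> System.currentTimeMillis() - start > 200, 5000, 50);
    System.out.println("precondition met, test can proceed");
  }
}
{code}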

> Harden MultiThreadedOCPTest
> ---
>
> Key: SOLR-14524
> URL: https://issues.apache.org/jira/browse/SOLR-14524
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Ilan Ginzburg
>Priority: Minor
>  Labels: test
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> {{MultiThreadedOCPTest.test()}} fails occasionally in Jenkins because of 
> timing of task enqueues to the Collection API queue.
> This test in {{testFillWorkQueue()}} enqueues a large number of tasks (115, 
> more than the 100 Collection API parallel executors) to the Collection API 
> queue for a collection COLL_A, then observes a short delay and enqueues a 
> task for another collection COLL_B.
>  It verifies that the COLL_B task (that does not require the same lock as the 
> COLL_A tasks) completes before the third COLL_A task.
> Test failures happen because when enqueues are slowed down enough, the first 
> 3 tasks on COLL_A complete even before the COLL_B task gets enqueued!
> In one sample failed Jenkins test execution, the COLL_B task enqueue happened 
> 1275ms after the enqueue of the first COLL_A, leaving plenty of time for a 
> few (and possibly all) COLL_A tasks to complete.
> Fix will be along the lines of:
>  * Make the “blocking” COLL_A task longer to execute (currently 1 second) to 
> compensate for slow enqueues.
>  * Verify the COLL_B task (a 1ms task) finishes before the long running 
> COLL_A task does. This would be a good indication that even though the 
> collection queue was filled with tasks waiting for a busy lock, a non 
> competing task was picked and executed right away.
>  * Delay the enqueue of the COLL_B task to the end of processing of the first 
> COLL_A task. This would guarantee that COLL_B is enqueued once at least some 
> COLL_A tasks started processing at the Overseer. Possibly also verify that 
> the long running task of COLL_A didn't finish execution yet when the COLL_B 
> task is enqueued...
>  * It might be possible to set a (very) long duration for the slow task of 
> COLL_A (to be less vulnerable to execution delays) without requiring the test 
> to wait for that task to complete, but only wait for the COLL_B task to 
> complete (so the test doesn't run for too long).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc opened a new pull request #1548: SOLR-14524: Harden MultiThreadedOCPTest testFillWorkQueue()

2020-06-01 Thread GitBox


murblanc opened a new pull request #1548:
URL: https://github.com/apache/lucene-solr/pull/1548


   
   # Description
   
   Make MultiThreadedOCPTest.testFillWorkQueue() less vulnerable to timing 
issues when test or overseer code are being slowed down by external factors 
(load, GC etc).
   
   # Solution
   
   A combination of verifying preconditions (to fail with meaningful messages helping pinpoint issues for future test hardening), longer task execution time (that **does not delay** total test runtime) and light synchronization between test steps and Overseer processing progress. 
   
   # Tests
   
   This _is_ a test.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [X] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [X] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [X] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [X] I have developed this patch against the `master` branch.
   - [X] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] markharwood commented on pull request #1541: RegExp - add case insensitive matching option

2020-06-01 Thread GitBox


markharwood commented on pull request #1541:
URL: https://github.com/apache/lucene-solr/pull/1541#issuecomment-636710038


   On reflection, you're right - the single flag is trappy.
   I'd like to refactor this class to make this simpler. The root problem we 
have is propagating parser state (flags/options) down to the objects that 
represent clauses in the parse tree. This is made difficult by the fact that 
RegExp is a single class representing both the parser and the parsed nodes.
   I suggest refactoring so that :
   1) RegExp remains the user-facing class with the public constructor and has 
the parsing logic
   2) We use a new private class RegExpClause to hold clause state, but being 
an inner class it has access to the flags in the outer RegExp instance that 
contains it.
   
   This should solve the problem of propagating settings and give us a sounder 
footing to build on.
   Should I do this refactor as part of this PR or another @jpountz ?
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-9301) Gradle: Jar MANIFEST incomplete

2020-06-01 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved LUCENE-9301.
-
Resolution: Fixed

> Gradle: Jar MANIFEST incomplete
> ---
>
> Key: LUCENE-9301
> URL: https://issues.apache.org/jira/browse/LUCENE-9301
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Affects Versions: master (9.0)
>Reporter: Jan Høydahl
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: LUCENE-9301.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After building with gradle, the MANIFEST.MF file for e.g. solr-core.jar 
> contains
> {noformat}
> Manifest-Version: 1.0
> {noformat}
> While when building with ant, it says
> {noformat}
> Manifest-Version: 1.0
> Ant-Version: Apache Ant 1.10.7
> Created-By: 11.0.6+10 (AdoptOpenJDK)
> Extension-Name: org.apache.solr
> Specification-Title: Apache Solr Search Server: solr-core
> Specification-Version: 9.0.0
> Specification-Vendor: The Apache Software Foundation
> Implementation-Title: org.apache.solr
> Implementation-Version: 9.0.0-SNAPSHOT 9b5542ad55da601e0bdfda96bad8c2c
>  cabbbc397 - janhoy - 2020-04-01 16:24:09
> Implementation-Vendor: The Apache Software Foundation
> X-Compile-Source-JDK: 11
> X-Compile-Target-JDK: 11
> {noformat}
> In addition, with ant, the META-INF folder also contains LICENSE.txt and 
> NOTICE.txt files.
> There is a macro {{build-manifest}} in common-build.xml that seems to build 
> the manifest.
> The effect of this is e.g. that spec and implementation versions do not show 
> in Solr Admin UI



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9301) Gradle: Jar MANIFEST incomplete

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120824#comment-17120824
 ] 

ASF subversion and git services commented on LUCENE-9301:
-

Commit da3dbb1921dd266ff4e78e6eabcc497958cb3933 in lucene-solr's branch 
refs/heads/master from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=da3dbb1 ]

LUCENE-9301: include build time and user name only in non-snapshot builds so 
that jars are not recompiled on each build in development.


> Gradle: Jar MANIFEST incomplete
> ---
>
> Key: LUCENE-9301
> URL: https://issues.apache.org/jira/browse/LUCENE-9301
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Affects Versions: master (9.0)
>Reporter: Jan Høydahl
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: LUCENE-9301.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After building with gradle, the MANIFEST.MF file for e.g. solr-core.jar 
> contains
> {noformat}
> Manifest-Version: 1.0
> {noformat}
> Whereas when building with ant, it says
> {noformat}
> Manifest-Version: 1.0
> Ant-Version: Apache Ant 1.10.7
> Created-By: 11.0.6+10 (AdoptOpenJDK)
> Extension-Name: org.apache.solr
> Specification-Title: Apache Solr Search Server: solr-core
> Specification-Version: 9.0.0
> Specification-Vendor: The Apache Software Foundation
> Implementation-Title: org.apache.solr
> Implementation-Version: 9.0.0-SNAPSHOT 9b5542ad55da601e0bdfda96bad8c2c
>  cabbbc397 - janhoy - 2020-04-01 16:24:09
> Implementation-Vendor: The Apache Software Foundation
> X-Compile-Source-JDK: 11
> X-Compile-Target-JDK: 11
> {noformat}
> In addition, with ant, the META-INF folder also contains LICENSE.txt and 
> NOTICE.txt files.
> There is a macro {{build-manifest}} in common-build.xml that seems to build 
> the manifest.
> The effect of this is e.g. that spec and implementation versions do not show 
> in the Solr Admin UI.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14491) DocTransformers don't use correct principal using Kerberos

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120819#comment-17120819
 ] 

ASF subversion and git services commented on SOLR-14491:


Commit 7fe79e3eeafd9691e2ba285560fed4db03525ec0 in lucene-solr's branch 
refs/heads/branch_8x from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7fe79e3 ]

SOLR-14491: Intercepting internode requests in KerberosPlugin when HTTP/2 
client is used
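
The idea behind the fix is that internode requests sent through the HTTP/2 client need the same interception the older client path already had, so the original caller's identity travels with the fanned-out request instead of only the internode service identity. A purely hypothetical sketch of that pattern (this is not the KerberosPlugin code; the header name and helper method are invented for illustration):

{code:java}
import java.net.URI;
import java.net.http.HttpRequest;
import java.security.Principal;

public class ForwardPrincipalExample {
    // Builds an internode request that carries the original caller's identity.
    // "X-Forwarded-User" is an invented header name, purely for illustration.
    static HttpRequest withForwardedUser(URI target, Principal original) {
        HttpRequest.Builder b = HttpRequest.newBuilder(target).GET();
        if (original != null) {
            b.header("X-Forwarded-User", original.getName());
        }
        return b.build();
    }
}
{code}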


> DocTransformers don't use correct principal using Kerberos
> --
>
> Key: SOLR-14491
> URL: https://issues.apache.org/jira/browse/SOLR-14491
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-14491.patch
>
>
> This issue was reported by [~moshebla] here:
> [https://lucene.472066.n3.nabble.com/Getting-authenticated-user-inside-DocTransformer-plugin-td4454941.html]
> This is a problem since the original user principal isn't passed along for 
> DocTransformers (and possibly other internode query operations).
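
To see why that matters, consider a transformer that masks field values for everyone except a privileged caller: evaluated on a remote shard that only sees the internode identity, the check runs against the wrong principal. A hypothetical sketch (not the actual DocTransformer API; the class and the "admin" check are made up for illustration):

{code:java}
import java.security.Principal;

public class UserAwareTransform {
    private final Principal user; // the identity the request arrived with

    UserAwareTransform(Principal user) {
        this.user = user;
    }

    // Masks the value unless the caller is the privileged user. On a remote
    // shard that only ever sees the internode (service) principal, this check
    // is made against the wrong identity, which is the bug described above.
    String transform(String fieldValue) {
        if (user != null && "admin".equals(user.getName())) {
            return fieldValue;
        }
        return "***";
    }
}
{code}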



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14491) DocTransformers don't use correct principal using Kerberos

2020-06-01 Thread Ishan Chattopadhyaya (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-14491.
-
Fix Version/s: 8.6
   Resolution: Fixed

Thanks [~moshebla]!

> DocTransformers don't use correct principal using Kerberos
> --
>
> Key: SOLR-14491
> URL: https://issues.apache.org/jira/browse/SOLR-14491
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14491.patch
>
>
> This issue was reported by [~moshebla] here:
> [https://lucene.472066.n3.nabble.com/Getting-authenticated-user-inside-DocTransformer-plugin-td4454941.html]
> This is a problem since the original user principal isn't passed along for 
> DocTransformers (and possibly other internode query operations).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14491) DocTransformers don't use correct principal using Kerberos

2020-06-01 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17120818#comment-17120818
 ] 

ASF subversion and git services commented on SOLR-14491:


Commit 1dda6848760e0a24e8f8b10f112081d527336eaa in lucene-solr's branch 
refs/heads/master from Ishan Chattopadhyaya
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1dda684 ]

SOLR-14491: Intercepting internode requests in KerberosPlugin when HTTP/2 
client is used


> DocTransformers don't use correct principal using Kerberos
> --
>
> Key: SOLR-14491
> URL: https://issues.apache.org/jira/browse/SOLR-14491
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-14491.patch
>
>
> This issue was reported by [~moshebla] here:
> [https://lucene.472066.n3.nabble.com/Getting-authenticated-user-inside-DocTransformer-plugin-td4454941.html]
> This is a problem since the original user principal isn't passed along for 
> DocTransformers (and possibly other internode query operations).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Assigned] (SOLR-14491) DocTransformers don't use correct principal using Kerberos

2020-06-01 Thread Ishan Chattopadhyaya (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-14491:
---

Assignee: Ishan Chattopadhyaya

> DocTransformers don't use correct principal using Kerberos
> --
>
> Key: SOLR-14491
> URL: https://issues.apache.org/jira/browse/SOLR-14491
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
> Attachments: SOLR-14491.patch
>
>
> This issue was reported by [~moshebla] here:
> [https://lucene.472066.n3.nabble.com/Getting-authenticated-user-inside-DocTransformer-plugin-td4454941.html]
> This is a problem since the original user principal isn't passed along for 
> DocTransformers (and possibly other internode query operations).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] tejassawai commented on pull request #763: SOLR-13608: Incremental backup for Solr

2020-06-01 Thread GitBox


tejassawai commented on pull request #763:
URL: https://github.com/apache/lucene-solr/pull/763#issuecomment-636670244


   It is still in the Open state, which means incremental backup is not available 
in Apache Solr as of now, right? Or is there any way to get it?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org