[jira] [Commented] (SOLR-14397) Vector Search in Solr

2020-04-20 Thread Trey Grainger (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088265#comment-17088265
 ] 

Trey Grainger commented on SOLR-14397:
--

Documenting some implementation notes for some non-obvious decisions I've made 
along the way. Mostly notes for me to revisit, but also possibly some insight 
for anyone reviewing (feedback welcome).
 # It feels like a cleaner document API design would be for the field input 
format to accept an array instead of a delimited string. So, for example, 
change from this on documents:
{code:java}
{"vectors":"1.0,5.0,0.0,0.0,0.0,4.0,4.0,3.0|0.0,5.0,0.0,0.0,0.0,3.0,5.0,4.0"},
{"id":"6", "vectors":"0.0,5.0,4.0,0.0,4.0,1.0,3.0,3.0"}{code}
to this:
{code:java}
{"id":"6", 
"vectors":[[1.0,5.0,0.0,0.0,0.0,4.0,4.0,3.0],[0.0,5.0,0.0,0.0,0.0,3.0,5.0,4.0]]},
{"id":"6", "vectors":[0.0,5.0,4.0,0.0,4.0,1.0,3.0,3.0]}{code}
I didn't do this because the document parser doesn't understand 
multi-dimensional arrays (arrays of arrays), and because it wants to treat an 
array as a multi-valued field of numbers (adding each number as a new field on 
the document) instead of treating the full array as a single value. If the 
parser were field-aware and could delegate the parsing, this would be much 
easier (and would make it possible to extend support for fields with 
interesting kinds of objects in the future), but for now I side-stepped all of 
this and am just passing in a delimited string instead because it works well. 

Additionally, I experimented with supporting multiple vectors per field by 
making the field {{multiValued=true}}, but this creates multiple fields, and it 
wasn't immediately obvious to me that you could easily recombine each of those 
fields into one field later, which is needed to encode the vectors in order and 
use them efficiently.

As such, I arrived at the "pass in the whole list of vectors as a delimited 
string" approach you currently see. Happy to explore this input format further 
if others feel it is important; I just didn't want to get bogged down in those 
rabbit trails to enable syntactic sugar before getting the main functionality 
working.
 # The {{useDocValuesAsStored}} option on fields is quite unintuitive IMHO, and 
sometimes acts more as a suggestion (not always respected as the user would 
expect) than a reliable, user-specified choice. Specifically, it assumes that 
{{docValues}} and {{stored}} values are always equal and interchangeable if 
both are set (which isn't the case) and chooses one or the other based upon 
performance optimizations rather than the user-specified choice. It already has 
some hacks to handle point field (numeric) types, which violated this principle 
of {{stored=docValues}} values, with a comment to revisit if other types also 
need to work around it. Problematically, {{stored}} fields make a call to 
"{{toExternal}}" to translate the stored value to a readable value, but 
{{docValues}} make no such call. This means that when Solr chooses to return 
{{docValues}} instead of {{stored}} values (even though the user set 
{{useDocValuesAsStored=false}}), the display version in the results is going to 
be incorrect in cases where a different internal encoding is applied to the 
docValues (such as the binary compression for this DenseVector field). I made a 
change to the {{SolrDocumentFetcher}} to overcome this that I think better 
respects the {{useDocValuesAsStored}} option by treating it as a requirement 
instead of a suggestion, but the approach here warrants a separate conversation 
on another JIRA, so I'll file one soon.


 # Just noticed there is an "{{org.apache.solr.common.util.Base64}}" class that 
implements Base64 encoding. I've been using the Java "{{java.util.Base64}}" 
class. The Solr version was implemented in 2009, and it looks like the Java 
version didn't come out until Java 8, so maybe the Solr version is just old? If 
so, it would be worthwhile to validate that there are no differences favoring 
the Solr version (in encoding or performance) and consider replacing the Solr 
version with the Java version. Otherwise, if anyone knows a good reason to 
prefer the Solr version instead, I'm happy to switch over.
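Returning to note 1 above: the delimited-string input format ("|" separating vectors, "," separating components) is straightforward to parse. A minimal sketch; the class and method names are mine for illustration, not from the patch:

```java
// Hypothetical parser for the "1.0,2.0|3.0,4.0" style delimited vector
// string described above. Illustrative only, not the patch's implementation.
public class DelimitedVectorParser {
    public static float[][] parse(String raw) {
        String[] vectorStrings = raw.split("\\|");  // "|" separates vectors
        float[][] vectors = new float[vectorStrings.length][];
        for (int i = 0; i < vectorStrings.length; i++) {
            String[] parts = vectorStrings[i].split(",");  // "," separates components
            float[] v = new float[parts.length];
            for (int j = 0; j < parts.length; j++) {
                v[j] = Float.parseFloat(parts[j].trim());
            }
            vectors[i] = v;
        }
        return vectors;
    }

    public static void main(String[] args) {
        float[][] vs = parse("1.0,5.0,0.0|0.0,5.0,4.0");
        System.out.println(vs.length + " vectors, " + vs[0].length + " dims each");
    }
}
```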
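On the Base64 question in note 3: round-tripping a {{float[]}} through {{java.util.Base64}} takes only a few lines. A hedged sketch with illustrative names (this is how the JDK class could be used, not necessarily how the patch uses it):

```java
import java.nio.ByteBuffer;
import java.util.Base64;

// Sketch of encoding/decoding a float[] with the JDK's java.util.Base64.
// Names are illustrative, not taken from the Solr patch.
public class VectorBase64 {
    public static String encode(float[] vector) {
        ByteBuffer buf = ByteBuffer.allocate(vector.length * Float.BYTES);
        for (float f : vector) buf.putFloat(f);
        return Base64.getEncoder().encodeToString(buf.array());
    }

    public static float[] decode(String encoded) {
        ByteBuffer buf = ByteBuffer.wrap(Base64.getDecoder().decode(encoded));
        float[] vector = new float[buf.remaining() / Float.BYTES];
        for (int i = 0; i < vector.length; i++) vector[i] = buf.getFloat();
        return vector;
    }

    public static void main(String[] args) {
        String encoded = encode(new float[]{5.0f, 0.0f, 1.0f});
        System.out.println(encoded + " -> " + decode(encoded).length + " floats");
    }
}
```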

> Vector Search in Solr
> -
>
> Key: SOLR-14397
> URL: https://issues.apache.org/jira/browse/SOLR-14397
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Trey Grainger
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Search engines have traditionally relied upon token-based matching (typically 
> keywords) on an inverted index, plus relevance ranking based upon keyword 
> occurrence statistics. This can be viewed as a "sparse vector" match (where 
> each term is a one-hot encoded 

[jira] [Comment Edited] (SOLR-14397) Vector Search in Solr

2020-04-20 Thread Trey Grainger (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088227#comment-17088227
 ] 

Trey Grainger edited comment on SOLR-14397 at 4/21/20, 2:35 AM:


Ok, I pushed an initial commit of the DenseVector field type last night. It 
currently supports two kinds of encoding: "{{string}}" (encodes the full vector 
as a raw string) and "{{base64}}" (encodes a {{float[]}} of the vector in 
Base64). The latter should be more efficient, though I haven't done performance 
testing yet (that's next on my plate). After I get performance benchmarks in 
place, I plan to implement some more efficient encodings (ideally something 
like bfloat16, which TensorFlow and other DL frameworks now use).
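As a reference point for the bfloat16 idea: bfloat16 keeps only the top 16 bits of an IEEE-754 float (sign, 8-bit exponent, 7-bit mantissa), so a naive conversion is a bit shift. This is a sketch of the concept only; production implementations typically add round-to-nearest-even rather than plain truncation:

```java
// Minimal bfloat16 round-trip: truncate a float's low 16 mantissa bits.
// Conceptual sketch, not the encoding the patch will necessarily use.
public class BFloat16Sketch {
    public static short toBFloat16(float f) {
        return (short) (Float.floatToIntBits(f) >>> 16);  // keep sign+exponent+7 mantissa bits
    }

    public static float fromBFloat16(short bits) {
        return Float.intBitsToFloat((bits & 0xFFFF) << 16);  // restore to float32, low bits zeroed
    }

    public static void main(String[] args) {
        float original = 3.14159f;
        float roundTripped = fromBFloat16(toBFloat16(original));
        System.out.println(original + " -> " + roundTripped);  // lossy but close
    }
}
```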

The {{vector_cosine}} and {{vector_dotproduct}} functions will now work on any 
of the following:
 # A regular {{String}} field with a vector represented in the field (no 
validation of contents)
 # A {{DenseVector}} field with the encoding set to "{{string}}" (functionally 
equivalent to the first, but with index-time validation that the content 
represents vectors)
 # A {{DenseVector}} field with the encoding set to "{{base64}}" (more compact 
and efficient encoding/processing of {{float[]s}})
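The math behind the two functions is standard; a minimal sketch of dot product and cosine similarity over {{float[]}} vectors (class and method names are mine, not the patch's value-source implementation):

```java
// Standard dot product and cosine similarity over float[] vectors.
// Illustrative names; the patch implements these as Solr value sources.
public class VectorSimilarity {
    public static float dotProduct(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }

    // Cosine similarity: dot product normalized by both vector magnitudes.
    public static float cosine(float[] a, float[] b) {
        return dotProduct(a, b)
            / (float) (Math.sqrt(dotProduct(a, a)) * Math.sqrt(dotProduct(b, b)));
    }

    public static void main(String[] args) {
        float[] donut = {5.0f, 0.0f, 1.0f, 5.0f, 0.0f, 4.0f, 5.0f, 1.0f};
        float[] pizza = {5.0f, 0.0f, 4.0f, 4.0f, 0.0f, 1.0f, 5.0f, 2.0f};
        System.out.println("cosine = " + cosine(donut, pizza));
    }
}
```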

Changes worth noting:
 # Renamed "{{average}}" to "{{avg}}" in the score selectors to be more 
consistent with the rest of Solr. The third argument in the function queries 
can now be one of the selectors {{first}}, {{last}}, {{min}}, {{max}}, or 
{{avg}}.
 # I put a "{{dimensions}}" property on the field type, but it doesn't do 
anything yet. I will implement this soon; essentially, it allows you to force 
explicit validation of the length of any vectors added to the field, to prevent 
down-stream issues at query time. By default, variable-length vectors are 
allowed, which may be undesirable in most use cases. Once implemented, you can 
do {{required="true" dimensions="128"}}, for example, to enforce that all 
documents have at least one vector and that every vector is of size {{128}}.
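The selector argument (third argument to the function queries) can be illustrated with a small sketch. These are hypothetical names operating on a raw array of per-vector scores; the real implementation works on value sources, so treat this as the behavior only:

```java
// Behavior of the first/last/min/max/avg selectors over the per-vector
// similarity scores of a multi-vector field. Illustrative sketch only.
public class ScoreSelectorSketch {
    public static float select(float[] scores, String selector) {
        switch (selector) {
            case "first": return scores[0];
            case "last":  return scores[scores.length - 1];
            case "min": {
                float m = scores[0];
                for (float s : scores) m = Math.min(m, s);
                return m;
            }
            case "max": {
                float m = scores[0];
                for (float s : scores) m = Math.max(m, s);
                return m;
            }
            case "avg": {
                float sum = 0f;
                for (float s : scores) sum += s;
                return sum / scores.length;
            }
            default: throw new IllegalArgumentException("Unknown selector: " + selector);
        }
    }

    public static void main(String[] args) {
        float[] scores = {0.2f, 0.9f, 0.5f};  // e.g. cosine scores per vector
        System.out.println("max = " + select(scores, "max"));
    }
}
```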

I haven't sufficiently tested edge cases for other field configuration options 
yet ({{indexed}}/{{stored}}/{{docValues}}/etc.), but the field and function 
query / value sources are usable and work well on the happy path currently. If 
you'd like to try it out, here's an updated usage guide:
h1. How to Use:

*Build and Start*
{code:java}
ant server && bin/solr start -c -a 
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6900"
{code}
*Create Collection*
{code:java}
curl -H "Content-Type: application/json" \
"http://localhost:8983/solr/admin/collections?action=CREATE&name=vectors&collection.configName=_default&numShards=1"
{code}
*Add Dense Vector FieldType to Schema*
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field-type" : {
 "name":"denseVector",
 "class":"solr.DenseVectorField",
 "encoding": "base64"
   }
}' http://localhost:8983/solr/vectors/schema
{code}
*Add Vectors Field to Schema*
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field":{
 "name":"vectors_v",
 "type":"denseVector",
 "indexed": false,
 "stored": true,
 "docValues": true,
 "multiValued": false,
 "useDocValuesAsStored": false
  }
}' http://localhost:8983/solr/vectors/schema
{code}
*Index Documents*
{code:java}
curl -X POST -H "Content-Type: application/json"  
"http://localhost:8983/solr/vectors/update?commit=true; \
--data-binary ' [ {"id": "1", "name_s":"donut", "vectors_v": 
"5.0,0.0,1.0,5.0,0.0,4.0,5.0,1.0|4.0,0.0,1.2,3.0,0.3,3.0,3.0,0.75|6.0,0.0,2.0,4.0,0.0,5.0,6.0,0.8"},
 {"id": "2", "name_s":"apple juice", 
"vectors_v":"1.0,5.0,0.0,0.0,0.0,4.0,4.0,3.0|0.0,5.0,0.0,0.0,0.0,3.0,5.0,4.0"}, 
{"id": "3", "name_s":"cappuccino", 
"vectors_v":"0.0,5.0,3.0,0.0,4.0,1.0,2.0,3.0"}, {"id": "4", "name_s":"cheese 
pizza", "vectors_v":"5.0,0.0,4.0,4.0,0.0,1.0,5.0,2.0"}, {"id": "5", 
"name_s":"green tea", "vectors_v":"0.0,5.0,0.0,0.0,2.0,1.0,1.0,5.0"}, {"id": 
"6", "name_s":"latte", "vectors_v":"0.0,5.0,4.0,0.0,4.0,1.0,3.0,3.0"}, {"id": 
"7", "name_s":"soda", "vectors_v":"0.0,5.0,0.0,0.0,3.0,5.0,5.0,0.0"}, {"id": 
"8", "name_s":"cheese bread sticks", 
"vectors_v":"5.0,0.0,4.0,5.0,0.0,1.0,4.0,2.0"}, {"id": "9", "name_s":"water", 
"vectors_v":"0.0,5.0,0.0,0.0,0.0,0.0,0.0,5.0"}, {"id": "10", "name_s":"cinnamon 
bread sticks", "vectors_v":"5.0,0.0,1.0,5.0,0.0,3.0,4.0,2.0"} ]'
{code}
*Send Query*
{code:java}
curl -H "Content-Type: application/json" \
"http://localhost:8983/solr/vectors/select?q=*:*=id,name:name_s,cosine:\$func,vectors:vectors_v=vector_cosine(\$donut_vector,vectors_v,max)=\$func%20desc=11_vector=5.0,0.0,1.0,5.0,0.0,4.0,5.0,1.0"
{code}
*Response*
{code:java}
{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":2,
"params":{
  "q":"*:*",
  "func":"vector_cosine($donut_vector,vectors_v,max)",
  

[jira] [Commented] (SOLR-14397) Vector Search in Solr

2020-04-20 Thread Trey Grainger (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088227#comment-17088227
 ] 

Trey Grainger commented on SOLR-14397:
--

Ok, I pushed an initial commit of the DenseVector field type last night. It 
currently supports two kinds of encoding: "{{string}}" (encodes the full vector 
as a raw string) and "{{base64}}" (encodes a {{float[]}} of the vector in 
Base64). The latter should be more efficient, though I haven't done performance 
testing yet (that's next on my plate). After I get performance benchmarks in 
place, I plan to implement some more efficient encodings (ideally something 
like bfloat16, which TensorFlow and other DL frameworks now use).

The vector_cosine and vector_dotproduct functions will now work on any of the 
following:
 # A regular String field with a vector represented in the field (no validation 
of contents)
 # A DenseVector field with the encoding set to "{{string}}" (functionally 
equivalent to the first, but with index-time validation that the content 
represents vectors)
 # A DenseVector field with the encoding set to "{{base64}}" (more compact and 
efficient encoding/processing of {{float[]s}})

Changes worth noting:
 # Renamed "{{average}}" to "{{avg}}" in the score selectors to be more 
consistent with the rest of Solr. The third argument in the function queries 
can now be one of the selectors {{first}}, {{last}}, {{min}}, {{max}}, or 
{{avg}}.
 # I put a "{{dimensions}}" property on the field type, but it doesn't do 
anything yet. I will implement this soon; essentially, it allows you to force 
explicit validation of the length of any vectors added to the field, to prevent 
down-stream issues at query time. By default, variable-length vectors are 
allowed, which may be undesirable in most use cases. Once implemented, you can 
do {{required="true" dimensions="128"}}, for example, to enforce that all 
documents have at least one vector and that every vector is of size {{128}}.

I haven't sufficiently tested edge cases for other field configuration options 
yet ({{indexed}}/{{stored}}/{{docValues}}/etc.), but the field and function 
query / value sources are usable and work well on the happy path currently. If 
you'd like to try it out, here's an updated usage guide:
h1. How to Use:

*Build and Start*
{code:java}
ant server && bin/solr start -c -a 
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=6900"
{code}

*Create Collection*
{code:java}
curl -H "Content-Type: application/json" \
"http://localhost:8983/solr/admin/collections?action=CREATE&name=vectors&collection.configName=_default&numShards=1"
{code}

*Add Dense Vector FieldType to Schema*
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field-type" : {
 "name":"denseVector",
 "class":"solr.DenseVectorField",
 "encoding": "base64"
   }
}' http://localhost:8983/solr/vectors/schema
{code}

*Add Vectors Field to Schema*
{code:java}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "add-field":{
 "name":"vectors_v",
 "type":"denseVector",
 "indexed": false,
 "stored": true,
 "docValues": true,
 "multiValued": false,
 "useDocValuesAsStored": false
  }
}' http://localhost:8983/solr/vectors/schema
{code}

*Index Documents*
{code:java}
curl -X POST -H "Content-Type: application/json"  
"http://localhost:8983/solr/vectors/update?commit=true; \
--data-binary ' [ {"id": "1", "name_s":"donut", "vectors_v": 
"5.0,0.0,1.0,5.0,0.0,4.0,5.0,1.0|4.0,0.0,1.2,3.0,0.3,3.0,3.0,0.75|6.0,0.0,2.0,4.0,0.0,5.0,6.0,0.8"},
 {"id": "2", "name_s":"apple juice", 
"vectors_v":"1.0,5.0,0.0,0.0,0.0,4.0,4.0,3.0|0.0,5.0,0.0,0.0,0.0,3.0,5.0,4.0"}, 
{"id": "3", "name_s":"cappuccino", 
"vectors_v":"0.0,5.0,3.0,0.0,4.0,1.0,2.0,3.0"}, {"id": "4", "name_s":"cheese 
pizza", "vectors_v":"5.0,0.0,4.0,4.0,0.0,1.0,5.0,2.0"}, {"id": "5", 
"name_s":"green tea", "vectors_v":"0.0,5.0,0.0,0.0,2.0,1.0,1.0,5.0"}, {"id": 
"6", "name_s":"latte", "vectors_v":"0.0,5.0,4.0,0.0,4.0,1.0,3.0,3.0"}, {"id": 
"7", "name_s":"soda", "vectors_v":"0.0,5.0,0.0,0.0,3.0,5.0,5.0,0.0"}, {"id": 
"8", "name_s":"cheese bread sticks", 
"vectors_v":"5.0,0.0,4.0,5.0,0.0,1.0,4.0,2.0"}, {"id": "9", "name_s":"water", 
"vectors_v":"0.0,5.0,0.0,0.0,0.0,0.0,0.0,5.0"}, {"id": "10", "name_s":"cinnamon 
bread sticks", "vectors_v":"5.0,0.0,1.0,5.0,0.0,3.0,4.0,2.0"} ]'
{code}

*Send Query*
{code:java}
curl -H "Content-Type: application/json" \
"http://localhost:8983/solr/vectors/select?q=*:*=id,name:name_s,cosine:\$func,vectors:vectors_v=vector_cosine(\$donut_vector,vectors_v,max)=\$func%20desc=11_vector=5.0,0.0,1.0,5.0,0.0,4.0,5.0,1.0"
{code}

*Response*
{code:java}
{
  "responseHeader":{
"zkConnected":true,
"status":0,
"QTime":2,
"params":{
  "q":"*:*",
  "func":"vector_cosine($donut_vector,vectors_v,max)",
  "donut_vector":"5.0,0.0,1.0,5.0,0.0,4.0,5.0,1.0",
  

[jira] [Commented] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2020-04-20 Thread Isabelle Giguere (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088204#comment-17088204
 ] 

Isabelle Giguere commented on SOLR-7642:


Patch attached, off current master. This patch is still the original 
proposition: just add -DcreateZkRoot=true on start-up.

Ex: "/opt/solr/bin/solr start -f -z zookeeper:2181/solr555 -DcreateZkRoot=true"

> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch, SOLR-7642_tag_7.5.0_proposition.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-7642) Should launching Solr in cloud mode using a ZooKeeper chroot create the chroot znode if it doesn't exist?

2020-04-20 Thread Isabelle Giguere (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-7642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-7642:
---
Attachment: SOLR-7642.patch

> Should launching Solr in cloud mode using a ZooKeeper chroot create the 
> chroot znode if it doesn't exist?
> -
>
> Key: SOLR-7642
> URL: https://issues.apache.org/jira/browse/SOLR-7642
> Project: Solr
>  Issue Type: Improvement
>Reporter: Timothy Potter
>Priority: Minor
> Attachments: SOLR-7642.patch, SOLR-7642.patch, SOLR-7642.patch, 
> SOLR-7642_tag_7.5.0.patch, SOLR-7642_tag_7.5.0_proposition.patch
>
>
> Launching Solr for the first time in cloud mode using a ZooKeeper 
> connection string that includes a chroot leads to the following 
> initialization error:
> {code}
> ERROR - 2015-06-05 17:15:50.410; [   ] org.apache.solr.common.SolrException; 
> null:org.apache.solr.common.cloud.ZooKeeperException: A chroot was specified 
> in ZkHost but the znode doesn't exist. localhost:2181/lan
> at 
> org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:113)
> at org.apache.solr.core.CoreContainer.load(CoreContainer.java:339)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:140)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:110)
> at 
> org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:138)
> at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:852)
> at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:298)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1349)
> at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1342)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:741)
> at 
> org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:505)
> {code}
> The work-around for this is to use the scripts/cloud-scripts/zkcli.sh script 
> to create the chroot znode (bootstrap action does this).
> I'm wondering if we shouldn't just create the znode if it doesn't exist? Or 
> is that some violation of using a chroot?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14420) Address AuthenticationPlugin TODO redeclare params as HttpServletRequest & HttpServletResponse

2020-04-20 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088197#comment-17088197
 ] 

David Smiley commented on SOLR-14420:
-

+1 looks good Mike.  Thanks for doing this.

> Address AuthenticationPlugin TODO redeclare params as HttpServletRequest & 
> HttpServletResponse
> --
>
> Key: SOLR-14420
> URL: https://issues.apache.org/jira/browse/SOLR-14420
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mike Drob
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This was noted in SOLR-11692 and then I think the surrounding code changed 
> more in SOLR-12290, but the TODO remained unaddressed. We can declare these as 
> HttpServletRequest/Response and all of the usages still work. There are 
> plenty of implementations where we just do a cast anyway, and don't even do 
> instanceof checks.
> I noticed this change for an external auth plugin that I'm working on that 
> appears to have issues handling casts between ServletRequest and the 
> CloseShield wrapper classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9317) Resolve package name conflicts for StandardAnalyzer to allow Java module system support

2020-04-20 Thread David Ryan (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088196#comment-17088196
 ] 

David Ryan commented on LUCENE-9317:


Looks like we are getting closer. Thanks for the feedback. I will look at 
leaving the oal.analysis.util package in common analysis and moving the factory 
classes into oal.analysis, while leaving deprecated factory classes in util. 
Would you consider doing the same with CustomAnalyzer (i.e., move it to 
oal.analysis and leave a deprecated class in the oal.analysis.custom package)?

Once you're happy with the approach, I'll focus on refactoring the 
META-INF/services, test cases, and ant/gradle build files. I'll make smaller 
commits on a branch for each change so each step can be seen and validated.

> Resolve package name conflicts for StandardAnalyzer to allow Java module 
> system support
> ---
>
> Key: LUCENE-9317
> URL: https://issues.apache.org/jira/browse/LUCENE-9317
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: David Ryan
>Priority: Major
>  Labels: build, features
>
>  
> To allow Lucene to be modularised there are a few preparatory tasks to be 
> completed prior to this being possible.  The Java module system requires that 
> jars do not use the same package name in different jars.  The lucene-core and 
> lucene-analyzers-common both share the package 
> org.apache.lucene.analysis.standard.
> Possible resolutions to this issue are discussed by Uwe on the mailing list 
> here:
>  
> [http://mail-archives.apache.org/mod_mbox/lucene-dev/202004.mbox/%3CCAM21Rt8FHOq_JeUSELhsQJH0uN0eKBgduBQX4fQKxbs49TLqzA%40mail.gmail.com%3E]
> {quote}About StandardAnalyzer: Unfortunately I aggressively complained a 
> while back when Mike McCandless wanted to move standard analyzer out of the 
> analysis package into core (“for convenience”). This was a bad step, and IMHO 
> we should revert that or completely rename the packages and everything. The 
> problem here is: As the analysis services are only part of lucene-analyzers, 
> we had to leave the factory classes there, but move the implementation 
> classes in core. The package has to be the same. The only way around that is 
> to move the analysis factory framework also to core (I would not be against 
> that). This would include all factory base classes and the service loading 
> stuff. Then we can move standard analyzer and some of the filters/tokenizers 
> including their factories to core and that problem would be solved.
> {quote}
> There are two options here, either move factory framework into core or revert 
> StandardAnalyzer back to lucene-analyzers.  In the email, the solution lands 
> on reverting back as per the task list:
> {quote}Add some preparatory issues to cleanup class hierarchy: Move Analysis 
> SPI to core / remove StandardAnalyzer and related classes out of core back to 
> analysis
> {quote}
>  
>  
>  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7788) fail precommit on unparameterised log messages and examine for wasted work/objects

2020-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088195#comment-17088195
 ] 

ASF subversion and git services commented on LUCENE-7788:
-

Commit 2bb97928448690be156caa4d1cc0ea4cc9c5be47 in lucene-solr's branch 
refs/heads/branch_8x from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2bb9792 ]

LUCENE-7788: fail precommit on unparameterised log messages and examine for 
wasted work/objects


> fail precommit on unparameterised log messages and examine for wasted 
> work/objects
> --
>
> Key: LUCENE-7788
> URL: https://issues.apache.org/jira/browse/LUCENE-7788
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7788.patch, LUCENE-7788.patch, gradle_only.patch, 
> gradle_only.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> SOLR-10415 would be removing existing unparameterised log.trace messages use 
> and once that is in place then this ticket's one-line change would be for 
> 'ant precommit' to reject any future unparameterised log.trace message use.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7788) fail precommit on unparameterised log messages and examine for wasted work/objects

2020-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088194#comment-17088194
 ] 

ASF subversion and git services commented on LUCENE-7788:
-

Commit c94770c2b9c00ccdc2d617d595d62f85a332dc0c in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c94770c ]

LUCENE-7788: fail precommit on unparameterised log messages and examine for 
wasted work/objects


> fail precommit on unparameterised log messages and examine for wasted 
> work/objects
> --
>
> Key: LUCENE-7788
> URL: https://issues.apache.org/jira/browse/LUCENE-7788
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7788.patch, LUCENE-7788.patch, gradle_only.patch, 
> gradle_only.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> SOLR-10415 would be removing existing unparameterised log.trace messages use 
> and once that is in place then this ticket's one-line change would be for 
> 'ant precommit' to reject any future unparameterised log.trace message use.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14420) Address AuthenticationPlugin TODO redeclare params as HttpServletRequest & HttpServletResponse

2020-04-20 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088150#comment-17088150
 ] 

Mike Drob commented on SOLR-14420:
--

[~dsmiley], [~uschindler] - since y'all were involved in the original 
conversations, would you mind taking a look at this? I couldn't find any reason 
why we needed to declare arguments as {{ServletRequest}} when we do blind casts 
in most implementations anyway, and this feels much safer.

> Address AuthenticationPlugin TODO redeclare params as HttpServletRequest & 
> HttpServletResponse
> --
>
> Key: SOLR-14420
> URL: https://issues.apache.org/jira/browse/SOLR-14420
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mike Drob
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This was noted in SOLR-11692 and then I think the surrounding code changed 
> more in SOLR-12290, but the TODO remained unaddressed. We can declare these as 
> HttpServletRequest/Response and all of the usages still work. There are 
> plenty of implementations where we just do a cast anyway, and don't even do 
> instanceof checks.
> I noticed this change for an external auth plugin that I'm working on that 
> appears to have issues handling casts between ServletRequest and the 
> CloseShield wrapper classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] madrob opened a new pull request #1442: SOLR-14420 Declare ServletRequests as HttpRequests

2020-04-20 Thread GitBox


madrob opened a new pull request #1442:
URL: https://github.com/apache/lucene-solr/pull/1442


   https://issues.apache.org/jira/browse/SOLR-14420
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14420) Address AuthenticationPlugin TODO redeclare params as HttpServletRequest & HttpServletResponse

2020-04-20 Thread Mike Drob (Jira)
Mike Drob created SOLR-14420:


 Summary: Address AuthenticationPlugin TODO redeclare params as 
HttpServletRequest & HttpServletResponse
 Key: SOLR-14420
 URL: https://issues.apache.org/jira/browse/SOLR-14420
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: security
Reporter: Mike Drob


This was noted in SOLR-11692 and then I think the surrounding code changed more 
in SOLR-12290, but the TODO remained unaddressed. We can declare these as 
HttpServletRequest/Response and all of the usages still work. There are plenty 
of implementations where we just do a cast anyway, and don't even do instanceof 
checks.

I noticed this change for an external auth plugin that I'm working on that 
appears to have issues handling casts between ServletRequest and the 
CloseShield wrapper classes.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (LUCENE-9321) Port documentation task to gradle

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17088129#comment-17088129
 ] 

Uwe Schindler edited comment on LUCENE-9321 at 4/20/20, 10:45 PM:
--

Hi,
I can look at this. The idea of that piece is just to generate the link list. 
XSL is fine for that; if you prefer something else, that's fine too. Analysis of 
what it's doing (I wrote part of it long ago, but some stuff was added by others):

- The first step collects all build files, filters some of them, converts them 
to URLs, and concatenates all those URLs using "|". The result is a property.
- The second step wasn't added by me: it is more or less only grepping the 
default Codec out of Codec.java. I think that should be a one-liner with 
Groovy: load file, apply regex, return result. I would not port that using Ant. 
Most stuff like this is so complicated in Ant because you are very limited in 
what you can do. All this resource/filtering is way too much work. A single 
line of Groovy can do this most of the time.
- The XSL is quite simple; the interesting part is only that it takes the 
parameters from the first two steps as input and generates the HTML out of it. 
It can easily be ported using Groovy code (Groovy knows XSL, too).
- The last step converts some markdown files to HTML. I have no idea if 
there's a plugin available to do that. It's basically a macro that uses the 
copy task to copy some markdown file (input) to an output file and converts it 
to HTML.

To test this in isolation, just run {{ant clean process-webpages}} and look 
into the build/docs folder.

My plan:
- Instead of collecting build.xml files, just ask Gradle for all projects and 
filter them. The later scripts just need the directory names of all modules, 
relative to the docs root folder, to create the links.
- The extraction of the codec should be a one-liner: open file, read as string, 
apply regex, assign the result to a Groovy variable.
- The XSL step could maybe be replaced by generating a temporary markdown 
"overview" file with a list of all subdirectories, the title, and the 
default codec.
- Use some Gradle task to convert all markdown input files (including the one 
generated previously) to HTML.

If you have some hints on where to place the task and if there's a 
Markdown->HTML converter readily available for Gradle, I'd be happy to code it 
:-) For Maven there's a plugin to do that; I use it quite often to generate 
documentation.
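
The codec-extraction step described above (open file, read as string, apply 
regex, take the captured group) can be sketched in Java as below. The exact 
assignment line in Codec.java is an assumption here (something roughly like 
{{static Codec defaultCodec = LOADER.lookup("Lucene84");}}), so the pattern and 
sample input are illustrative only:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DefaultCodecExtractor {

    // Assumed shape of the line in Codec.java that names the default codec.
    static final String SAMPLE_SOURCE =
        "static Codec defaultCodec = LOADER.lookup(\"Lucene84\");";

    // Capture the name passed to LOADER.lookup(...) on the defaultCodec line.
    static final Pattern DEFAULT_CODEC =
        Pattern.compile("defaultCodec\\s*=\\s*LOADER\\.lookup\\(\"([^\"]+)\"\\)");

    static String extractDefaultCodec(String source) {
        Matcher m = DEFAULT_CODEC.matcher(source);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // In the real build task the string would be read from Codec.java.
        System.out.println(extractDefaultCodec(SAMPLE_SOURCE)); // prints Lucene84
    }
}
```

In a Gradle script the same logic is indeed a Groovy one-liner: read the file 
text and match it against the pattern, assigning the group to a variable.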



> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: 

[jira] [Commented] (SOLR-14105) Http2SolrClient SSL not working in branch_8x

2020-04-20 Thread Akhmad Amirov (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088144#comment-17088144
 ] 

Akhmad Amirov commented on SOLR-14105:
--

In what version is this resolved? I get the error below in 8.5.1:

{noformat}
2020-04-20 17:41:05.841 ERROR (main) [   ] o.a.s.c.SolrCore null:org.apache.solr.common.SolrException: Error instantiating shardHandlerFactory class [HttpShardHandlerFactory]: java.lang.UnsupportedOperationException: X509ExtendedKeyManager only supported on Server
 at org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
 at org.apache.solr.core.CoreContainer.load(CoreContainer.java:647)
 ...
 at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
 at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
 at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:117)
 at org.eclipse.jetty.client.HttpClient.doStart(HttpClient.java:244)
 at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
 at org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:221)
 ... 54 more
{noformat}

> Http2SolrClient SSL not working in branch_8x
> 
>
> Key: SOLR-14105
> URL: https://issues.apache.org/jira/browse/SOLR-14105
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Jan Høydahl
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-14105.patch
>
>
> In branch_8x we upgraded to Jetty 9.4.24. This causes the following 
> exceptions when attempting to start server with SSL:
> {noformat}
> 2019-12-17 14:46:16.646 ERROR (main) [   ] o.a.s.c.SolrCore 
> null:org.apache.solr.common.SolrException: Error instantiating 
> shardHandlerFactory class [HttpShardHandlerFactory]: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:56)
>   at org.apache.solr.core.CoreContainer.load(CoreContainer.java:633)
> ...
> Caused by: java.lang.RuntimeException: 
> java.lang.UnsupportedOperationException: X509ExtendedKeyManager only 
> supported on Server
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.createHttpClient(Http2SolrClient.java:224)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient.(Http2SolrClient.java:154)
>   at 
> org.apache.solr.client.solrj.impl.Http2SolrClient$Builder.build(Http2SolrClient.java:833)
>   at 
> org.apache.solr.handler.component.HttpShardHandlerFactory.init(HttpShardHandlerFactory.java:321)
>   at 
> org.apache.solr.handler.component.ShardHandlerFactory.newInstance(ShardHandlerFactory.java:51)
>   ... 50 more
> Caused by: java.lang.UnsupportedOperationException: X509ExtendedKeyManager 
> only supported on Server
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.newSniX509ExtendedKeyManager(SslContextFactory.java:1273)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.getKeyManagers(SslContextFactory.java:1255)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.load(SslContextFactory.java:374)
>   at 
> org.eclipse.jetty.util.ssl.SslContextFactory.doStart(SslContextFactory.java:245)
>  {noformat}






[jira] [Commented] (SOLR-14419) Query DSL parameter $reference

2020-04-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088141#comment-17088141
 ] 

Mikhail Khludnev commented on SOLR-14419:
-

The fix is obvious, but are there any concerns, [~caomanhdat]? 

> Query DSL parameter $reference
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Priority: Major
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":"$prnts", "query":"..."}},
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Created] (SOLR-14419) Query DSL parameter $reference

2020-04-20 Thread Mikhail Khludnev (Jira)
Mikhail Khludnev created SOLR-14419:
---

 Summary: Query DSL parameter $reference
 Key: SOLR-14419
 URL: https://issues.apache.org/jira/browse/SOLR-14419
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: JSON Request API
Reporter: Mikhail Khludnev


What we can do with plain params: 
{{q=\{!parent which=$prnts}...&prnts=type:parent}}
obviously I want to have something like this in Query DSL:
{code}
{ "query": { "parents":{ "which":"$prnts", "query":"..."}},
  "params": {
  "prnts":"type:parent"
   }
}
{code} 








[jira] [Updated] (SOLR-14418) MetricsHistoryHandler: don't query .system collection

2020-04-20 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-14418:

Description: 
The MetricsHistoryHandler queries the .system collection on startup with a 
match-all-docs query just to see that it exists.  I think we should do this 
without actually issuing a query, like use ClusterState.  Ideally 
SolrClient.getCollections(regexp) would exist but alas.  In CloudSolrClient, 
it'd work via cluster state.  No query needed!

It's death by a thousand cuts in our Solr tests.  In tests, I wish Solr 
features like this were opt-in.
Environment: (was: The MetricsHistoryHandler queries the .system 
collection on startup with a match-all-docs query just to see that it exists.  
I think we should do this without actually issuing a query, like use 
ClusterState.  Ideally SolrClient.getCollections(regexp) would exist but alas.  
In CloudSolrClient, it'd work via cluster state.  No query needed!

It's death by a thousand cuts in our Solr tests.  In tests, I wish Solr 
features like this were opt-in.)

> MetricsHistoryHandler: don't query .system collection
> -
>
> Key: SOLR-14418
> URL: https://issues.apache.org/jira/browse/SOLR-14418
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: David Smiley
>Priority: Minor
>
> The MetricsHistoryHandler queries the .system collection on startup with a 
> match-all-docs query just to see that it exists.  I think we should do this 
> without actually issuing a query, like use ClusterState.  Ideally 
> SolrClient.getCollections(regexp) would exist but alas.  In CloudSolrClient, 
> it'd work via cluster state.  No query needed!
> It's death by a thousand cuts in our Solr tests.  In tests, I wish Solr 
> features like this were opt-in.
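
The check proposed above — verifying that the .system collection exists from 
cluster state, without issuing a query — might look roughly like the fragment 
below. This is a sketch under assumptions, not the actual handler code: it 
assumes a {{CloudSolrClient}} is at hand, and the real MetricsHistoryHandler 
wiring would differ.

{code:java}
// Sketch: consult the cached cluster state instead of sending a
// match-all-docs query to the .system collection.
ClusterState clusterState =
    cloudSolrClient.getClusterStateProvider().getClusterState();
boolean systemCollectionExists = clusterState.hasCollection(".system");
{code}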






[jira] [Created] (SOLR-14418) MetricsHistoryHandler: don't query .system collection

2020-04-20 Thread David Smiley (Jira)
David Smiley created SOLR-14418:
---

 Summary: MetricsHistoryHandler: don't query .system collection
 Key: SOLR-14418
 URL: https://issues.apache.org/jira/browse/SOLR-14418
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
 Environment: The MetricsHistoryHandler queries the .system collection 
on startup with a match-all-docs query just to see that it exists.  I think we 
should do this without actually issuing a query, like use ClusterState.  
Ideally SolrClient.getCollections(regexp) would exist but alas.  In 
CloudSolrClient, it'd work via cluster state.  No query needed!

It's death by a thousand cuts in our Solr tests.  In tests, I wish Solr 
features like this were opt-in.
Reporter: David Smiley









[jira] [Comment Edited] (LUCENE-9321) Port documentation task to gradle

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088129#comment-17088129
 ] 

Uwe Schindler edited comment on LUCENE-9321 at 4/20/20, 10:25 PM:
--

Hi,
I can look at this. The idea of that piece is just to generate the link list. 
XSL is fine for that; if you prefer something else, that's fine too. Analysis of 
what it's doing (I wrote part of it long ago, but some stuff was added by others):

- The first step collects all build files, filters some of them, converts them 
to URLs, and concatenates all those URLs using "|". The result is a property.
- The second step wasn't added by me: it is more or less only grepping the 
default Codec out of Codec.java. I think that should be a one-liner with 
Groovy: load file, apply regex, return result. I would not port that using Ant. 
Most stuff like this is so complicated in Ant because you are very limited in 
what you can do. All this resource/filtering is way too much work. A single 
line of Groovy can do this most of the time.
- The XSL is quite simple; the interesting part is only that it takes the 
parameters from the first two steps as input and generates the HTML out of it. 
It can easily be ported using Groovy code (Groovy knows XSL, too).
- The last step converts some markdown files to HTML. I have no idea if 
there's a plugin available to do that. It's basically a macro that uses the 
copy task to copy some markdown file (input) to an output file and converts it 
to HTML.

To test this, just run "ant process-webpages" and look into the build/docs 
folder.

My plan:
- Instead of collecting build.xml files, just ask Gradle for all projects and 
filter them. The later scripts just need the directory names of all modules, 
relative to the docs root folder, to create the links.
- The extraction of the codec should be a one-liner: open file, read as string, 
apply regex, assign the result to a Groovy variable.
- The XSL step could maybe be replaced by generating a temporary markdown 
"overview" file with a list of all subdirectories, the title, and the 
default codec.
- Use some Gradle task to convert all markdown input files (including the one 
generated previously) to HTML.

If you have some hints on where to place the task and if there's a 
Markdown->HTML converter readily available for Gradle, I'd be happy to code it 
:-) For Maven there's a plugin to do that; I use it quite often to generate 
documentation.



> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: 

[jira] [Commented] (SOLR-13030) MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?

2020-04-20 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088133#comment-17088133
 ] 

David Smiley commented on SOLR-13030:
-

Shouldn't this be Closed for 8.0?

> MetricsHistoryHandler appears to never be "close()ed" - causes thread leaks?
> 
>
> Key: SOLR-13030
> URL: https://issues.apache.org/jira/browse/SOLR-13030
> Project: Solr
>  Issue Type: Bug
>Reporter: Chris M. Hostetter
>Assignee: Mark Miller
>Priority: Major
>
> {{MetricsHistoryHandler}} implements {{Closeable}} and depends on its 
> {{close()}} method to shut down a {{private ScheduledThreadPoolExecutor 
> collectService}} as well as a {{private final SolrRrdBackendFactory factory}} 
> (which maintains its own {{private ScheduledThreadPoolExecutor}}), but -as far 
> as I can tell nothing seems to ever "close" this {{MetricsHistoryHandler}} 
> on shutdown.- After the changes in 75b1831967982, which move this close() call 
> to a {{ForkJoinPool}} in {{CoreContainer.shutdown()}}, it seems to be leaking 
> threads.






[jira] [Comment Edited] (LUCENE-9321) Port documentation task to gradle

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088129#comment-17088129
 ] 

Uwe Schindler edited comment on LUCENE-9321 at 4/20/20, 10:13 PM:
--

Hi,
I can look at this. The idea of that piece is just to generate the link list. 
XSL is fine for that; if you prefer something else, that's fine too. I have the 
following plan:

- The first step collects all build files, filters some of them, converts them 
to URLs, and concatenates all those URLs using "|". The result is a property.
- The second step wasn't added by me: it is more or less only grepping the 
default Codec out of Codec.java. I think that should be a one-liner with 
Groovy: load file, apply regex, return result. I would not port that using Ant. 
Most stuff like this is so complicated in Ant because you are very limited in 
what you can do. All this resource/filtering is way too much work. A single 
line of Groovy can do this most of the time.
- The XSL is quite simple; the interesting part is only that it takes the 
parameters from the first two steps as input and generates the HTML out of it. 
It can easily be ported using Groovy code (Groovy knows XSL, too).
- The last step converts some markdown files to HTML. I have no idea if 
there's a plugin available to do that. It's basically a macro that uses the 
copy task to copy some markdown file (input) to an output file and converts it 
to HTML.

To test this, just run "ant process-webpages" and look into the build/docs 
folder.

My plan:
- Instead of collecting build.xml files, just ask Gradle for all projects and 
filter them. The later scripts just need the directory names of all modules, 
relative to the docs root folder, to create the links.
- The extraction of the codec should be a one-liner: open file, read as string, 
apply regex, assign the result to a Groovy variable.
- The XSL step could maybe be replaced by generating a temporary markdown 
"overview" file with a list of all subdirectories, the title, and the 
default codec.
- Use some Gradle task to convert all markdown input files (including the one 
generated previously) to HTML.

If you have some hints on where to place the task and if there's a 
Markdown->HTML converter readily available for Gradle, I'd be happy to code it 
:-) For Maven there's a plugin to do that; I use it quite often to generate 
documentation.



> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: 

[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088129#comment-17088129
 ] 

Uwe Schindler commented on LUCENE-9321:
---

Hi,
I can look at this. The idea of that piece is just to generate the link list. 
XSL is fine for that; if you prefer something else, that's fine too. I have the 
following plan:

- The first step collects all build files, filters some of them, converts them 
to URLs, and concatenates all those URLs using "|". The result is a property.
- The second step wasn't added by me: it is more or less only grepping the 
default Codec out of Codec.java. I think that should be a one-liner with 
Groovy: load file, apply regex, return result. I would not port that using Ant. 
Most stuff like this is so complicated in Ant because you are very limited in 
what you can do. All this resource/filtering is way too much work. A single 
line of Groovy can do this most of the time.
- The XSL is quite simple; the interesting part is only that it takes the 
parameters from the first two steps as input and generates the HTML out of it. 
It can easily be ported using Groovy code (Groovy knows XSL, too).
- The last step converts some markdown files to HTML. I have no idea if 
there's a plugin available to do that. It's basically a macro that uses the 
copy task to copy some markdown file (input) to an output file and converts it 
to HTML.

To test this, just run "ant process-webpages" and look into the build/docs 
folder.

My plan:
- Instead of collecting build.xml files, just ask Gradle for all projects and 
filter them. The later scripts just need the directory names of all modules, 
relative to the docs root folder, to create the links.
- The extraction of the codec should be a one-liner: open file, read as string, 
apply regex, assign the result to a Groovy variable.
- The XSL step could maybe be replaced by generating a temporary markdown 
"overview" file with a list of all subdirectories, the title, and the 
default codec.
- Use some Gradle task to convert all markdown input files (including the one 
generated previously) to HTML.

If you have some hints on where to place the task and if there's a 
Markdown->HTML converter readily available for Gradle, I'd be happy to code it 
:-) For Maven there's a plugin to do that; I use it quite often to generate 
documentation.

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Updated] (SOLR-14417) Gradle build sometimes fails RE BlockPoolSlice

2020-04-20 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob updated SOLR-14417:
-
Description: 
There seems to be some package visibility hacks around our Hdfs integration:

{{/Users/dsmiley/SearchDev/lucene-solr/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java:125:
 error: BlockPoolSlice is not public in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl; cannot be accessed from 
outside package}}
{{List> modifiedHadoopClasses = Arrays.asList(BlockPoolSlice.class, 
DiskChecker.class,}}

This happens on my Gradle build when running {{gradlew testClasses}} (i.e. to 
compile tests) but Ant proceeded without issue.  The work-around is to run 
{{gradlew clean}} first but really I want our build to be smarter here.

CC [~krisden]

  was:
There seems to be some package visibility hacks around our Hdfs integration:

{{/Users/dsmiley/SearchDev/lucene-solr/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java:125:
 error: BlockPoolSlice is not public in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl; cannot be accessed from 
outside package
{{ {{ List> modifiedHadoopClasses = 
Arrays.asList(BlockPoolSlice.class, DiskChecker.class,}}

This happens on my Gradle build when running {{gradlew testClasses}} (i.e. to 
compile tests) but Ant proceeded without issue.  The work-around is to run 
{{gradlew clean}} first but really I want our build to be smarter here.

CC [~krisden]


> Gradle build sometimes fails RE BlockPoolSlice
> --
>
> Key: SOLR-14417
> URL: https://issues.apache.org/jira/browse/SOLR-14417
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: David Smiley
>Priority: Minor
>
> There seems to be some package visibility hacks around our Hdfs integration:
> {{/Users/dsmiley/SearchDev/lucene-solr/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java:125:
>  error: BlockPoolSlice is not public in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl; cannot be accessed 
> from outside package}}
> {{List> modifiedHadoopClasses = Arrays.asList(BlockPoolSlice.class, 
> DiskChecker.class,}}
> This happens on my Gradle build when running {{gradlew testClasses}} (i.e. to 
> compile tests) but Ant proceeded without issue.  The work-around is to run 
> {{gradlew clean}} first but really I want our build to be smarter here.
> CC [~krisden]






[jira] [Created] (SOLR-14417) Gradle build sometimes fails RE BlockPoolSlice

2020-04-20 Thread David Smiley (Jira)
David Smiley created SOLR-14417:
---

 Summary: Gradle build sometimes fails RE BlockPoolSlice
 Key: SOLR-14417
 URL: https://issues.apache.org/jira/browse/SOLR-14417
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Build
Reporter: David Smiley


There seems to be some package visibility hacks around our Hdfs integration:

{{/Users/dsmiley/SearchDev/lucene-solr/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java:125:
 error: BlockPoolSlice is not public in 
org.apache.hadoop.hdfs.server.datanode.fsdataset.impl; cannot be accessed from 
outside package
{{ {{ List> modifiedHadoopClasses = 
Arrays.asList(BlockPoolSlice.class, DiskChecker.class,}}

This happens on my Gradle build when running {{gradlew testClasses}} (i.e. to 
compile tests) but Ant proceeded without issue.  The work-around is to run 
{{gradlew clean}} first but really I want our build to be smarter here.

CC [~krisden]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2020-04-20 Thread Isabelle Giguere (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646656#comment-16646656
 ] 

Isabelle Giguere edited comment on SOLR-8394 at 4/20/20, 8:28 PM:
--

SOLR-8394_tag_7.5.0.patch : Same patch, on revision 61870, tag 7.5.0, latest 
release
*** patch deleted ***

Simple test:
http://localhost:8983/solr/all/admin/luke?wt=xml
- without the patch : -1
-- -1 is the default return value !
- fixed by the patch : 299034



was (Author: igiguere):
SOLR-8394_tag_7.5.0.patch : Same patch, on revision 61870, tag 7.5.0, latest 
release

Simple test:
http://localhost:8983/solr/all/admin/luke?wt=xml
- without the patch : -1
-- -1 is the default return value !
- fixed by the patch : 299034


> Luke handler doesn't support FilterLeafReader
> -
>
> Key: SOLR-8394
> URL: https://issues.apache.org/jira/browse/SOLR-8394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8394.patch, SOLR-8394.patch, SOLR-8394.patch
>
>
> When fetching index information, luke handler only looks at ramBytesUsed for 
> SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no 
> RAM usage is returned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2020-04-20 Thread Isabelle Giguere (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16646656#comment-16646656
 ] 

Isabelle Giguere edited comment on SOLR-8394 at 4/20/20, 8:28 PM:
--

SOLR-8394_tag_7.5.0.patch : Same patch, on revision 61870, tag 7.5.0, latest 
release
*patch deleted*

Simple test:
http://localhost:8983/solr/all/admin/luke?wt=xml
- without the patch : -1
-- -1 is the default return value !
- fixed by the patch : 299034



was (Author: igiguere):
SOLR-8394_tag_7.5.0.patch : Same patch, on revision 61870, tag 7.5.0, latest 
release
*** patch deleted ***

Simple test:
http://localhost:8983/solr/all/admin/luke?wt=xml
- without the patch : -1
-- -1 is the default return value !
- fixed by the patch : 299034


> Luke handler doesn't support FilterLeafReader
> -
>
> Key: SOLR-8394
> URL: https://issues.apache.org/jira/browse/SOLR-8394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8394.patch, SOLR-8394.patch, SOLR-8394.patch
>
>
> When fetching index information, luke handler only looks at ramBytesUsed for 
> SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no 
> RAM usage is returned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2020-04-20 Thread Isabelle Giguere (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-8394:
---
Attachment: (was: SOLR-8394_tag_7.5.0.patch)

> Luke handler doesn't support FilterLeafReader
> -
>
> Key: SOLR-8394
> URL: https://issues.apache.org/jira/browse/SOLR-8394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8394.patch, SOLR-8394.patch, SOLR-8394.patch
>
>
> When fetching index information, luke handler only looks at ramBytesUsed for 
> SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no 
> RAM usage is returned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2020-04-20 Thread Isabelle Giguere (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-8394:
---
Attachment: SOLR-8394.patch

> Luke handler doesn't support FilterLeafReader
> -
>
> Key: SOLR-8394
> URL: https://issues.apache.org/jira/browse/SOLR-8394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8394.patch, SOLR-8394.patch, SOLR-8394.patch, 
> SOLR-8394_tag_7.5.0.patch
>
>
> When fetching index information, luke handler only looks at ramBytesUsed for 
> SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no 
> RAM usage is returned.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-8394) Luke handler doesn't support FilterLeafReader

2020-04-20 Thread Isabelle Giguere (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088061#comment-17088061
 ] 

Isabelle Giguere commented on SOLR-8394:


New patch, off Git master.  Includes the unit test mentioned in the previous 
comment. 

> Luke handler doesn't support FilterLeafReader
> -
>
> Key: SOLR-8394
> URL: https://issues.apache.org/jira/browse/SOLR-8394
> Project: Solr
>  Issue Type: Improvement
>Reporter: Steve Molloy
>Priority: Major
> Attachments: SOLR-8394.patch, SOLR-8394.patch, SOLR-8394.patch, 
> SOLR-8394_tag_7.5.0.patch
>
>
> When fetching index information, luke handler only looks at ramBytesUsed for 
> SegmentReader leaves. If these readers are wrapped in FilterLeafReader, no 
> RAM usage is returned.
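The unwrapping the fix needs can be sketched in plain Java. This is a hedged illustration only: {{Reader}}, {{Filtered}}, and {{SegmentLike}} below are stand-ins invented for the example, not Lucene classes; in Lucene the analogous tools are {{FilterLeafReader#unwrap}} and an {{instanceof SegmentReader}} check before reading {{ramBytesUsed}}.

```java
interface Reader { }

class SegmentLike implements Reader {           // stands in for SegmentReader
    long ramBytesUsed() { return 299034; }
}

class Filtered implements Reader {              // stands in for FilterLeafReader
    final Reader in;
    Filtered(Reader in) { this.in = in; }
}

public class UnwrapDemo {
    // Walk through any number of filter wrappers to the innermost reader.
    static Reader unwrap(Reader r) {
        while (r instanceof Filtered) r = ((Filtered) r).in;
        return r;
    }

    static long ramBytesUsed(Reader leaf) {
        Reader inner = unwrap(leaf);
        // Only the innermost segment-like reader can report RAM usage;
        // without unwrapping, a wrapped reader contributes the -1 default.
        return (inner instanceof SegmentLike)
                ? ((SegmentLike) inner).ramBytesUsed() : -1;
    }

    public static void main(String[] args) {
        Reader wrapped = new Filtered(new Filtered(new SegmentLike()));
        System.out.println(ramBytesUsed(wrapped)); // 299034
        System.out.println(ramBytesUsed(new Filtered(null) {
        })); // -1: nothing segment-like inside
    }
}
```

Without the unwrap loop, the wrapped case falls through to the -1 default described in the comment above.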



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-04-20 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17088034#comment-17088034
 ] 

Dawid Weiss commented on LUCENE-9321:
-

When you scan for *.xsl there are literally two files there: site/index.xsl 
(and online-link.xsl). I think most of the XSLT processing is to substitute 
arguments:
{code}
  
  
  
  
{code}

and then process certain XML files passed in {{buildfiles}}.

I'm pretty sure it can be done from Gradle... a good question is whether it 
has to be done with XSLT, which smells only marginally newer than COBOL :) 

If you can leave it out I may take a look; can't promise a timeline because the 
world is fairly crazy at the moment.
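As a minimal sketch of that kind of argument substitution, the same thing can be done from plain Java with the JDK's built-in XSLT support ({{javax.xml.transform}}). The stylesheet and the {{version}} parameter name below are invented for illustration, not taken from the actual build files:

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;
import java.io.StringWriter;

public class XsltParamDemo {
    // Tiny stylesheet whose only job is to substitute a parameter into
    // the output, mimicking the ant build's argument substitution.
    static final String XSL =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:param name='version'/>"
      + "<xsl:output method='text'/>"
      + "<xsl:template match='/'>Docs for <xsl:value-of select='$version'/>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    public static String render(String version) throws Exception {
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(XSL)));
        t.setParameter("version", version);   // the substituted argument
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader("<root/>")),
                    new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(render("9.0.0-SNAPSHOT")); // Docs for 9.0.0-SNAPSHOT
    }
}
```

A Gradle task could drive the same {{Transformer}} API directly, which is one way to port the ant behaviour without shelling out to an external XSLT processor.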

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dweiss commented on a change in pull request #1441: LUCENE-9332 validate source patterns using gradle

2020-04-20 Thread GitBox


dweiss commented on a change in pull request #1441:
URL: https://github.com/apache/lucene-solr/pull/1441#discussion_r411632375



##
File path: gradle/validation/validate-source-patterns.gradle
##
@@ -1,3 +1,5 @@
+import org.gradle.plugins.ide.eclipse.model.Output

Review comment:
   Huh? Why Eclipse IDE import?





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14412) NPE in MetricsHistoryHandler when running single node in cloud mode with SSL

2020-04-20 Thread Mike Drob (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Drob resolved SOLR-14412.
--
Fix Version/s: master (9.0)
 Assignee: Mike Drob
   Resolution: Fixed

> NPE in MetricsHistoryHandler when running single node in cloud mode with SSL
> 
>
> Key: SOLR-14412
> URL: https://issues.apache.org/jira/browse/SOLR-14412
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (9.0)
> Environment: 9.0.0-SNAPSHOT, SSL enabled (self-signed certificate), 
> cloud mode.
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
> Fix For: master (9.0)
>
>
> {noformat}
> 2020-04-17 15:35:19.335 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:19.842 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:20.346 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:20.851 WARN  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
> localhost:8983_solr => java.lang.NullPointerException
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
> java.lang.NullPointerException: null
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at java.util.HashMap.forEach(HashMap.java:1336) ~[?:?]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:225)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:271)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:75)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchTagValues(SolrClientNodeStateProvider.java:139)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:128)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:506)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:378)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:235)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>  ~[?:?]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>  ~[?:?]
> at 

[jira] [Commented] (SOLR-14412) NPE in MetricsHistoryHandler when running single node in cloud mode with SSL

2020-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087973#comment-17087973
 ] 

ASF subversion and git services commented on SOLR-14412:


Commit 58f9c79c6d2bc6ae57e823dff071dd68a72d8956 in lucene-solr's branch 
refs/heads/master from Mike Drob
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=58f9c79 ]

SOLR-14412 zkRun+https (#1437)

SOLR-14412

Check for results after retries failed in SolrClientNodeStateProvider
Set urlScheme=https with zkRun
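The "check for results after retries failed" shape of the fix can be sketched generically. This is a hedged illustration, not the actual SolrClientNodeStateProvider code; the method and message below are invented:

```java
import java.util.concurrent.Callable;

public class RetryDemo {
    // Retry a lookup a few times; if every attempt comes back empty, fail
    // loudly instead of handing a null result to callers (a null result is
    // what produced the NPE further down in fetchReplicaMetrics).
    static <T> T fetchWithRetry(Callable<T> call, int attempts) throws Exception {
        T result = null;
        for (int i = 0; i < attempts; i++) {
            result = call.call();
            if (result != null) return result;
        }
        // Check AFTER the retries: surface the failure rather than return null.
        throw new IllegalStateException("no result after " + attempts + " attempts");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchWithRetry(() -> "metrics", 3)); // metrics
        try {
            fetchWithRetry(() -> null, 3);
        } catch (IllegalStateException expected) {
            System.out.println("failed cleanly: " + expected.getMessage());
        }
    }
}
```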

> NPE in MetricsHistoryHandler when running single node in cloud mode with SSL
> 
>
> Key: SOLR-14412
> URL: https://issues.apache.org/jira/browse/SOLR-14412
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (9.0)
> Environment: 9.0.0-SNAPSHOT, SSL enabled (self-signed certificate), 
> cloud mode.
>Reporter: Mike Drob
>Priority: Major
>
> {noformat}
> 2020-04-17 15:35:19.335 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:19.842 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:20.346 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:20.851 WARN  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
> localhost:8983_solr => java.lang.NullPointerException
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
> java.lang.NullPointerException: null
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at java.util.HashMap.forEach(HashMap.java:1336) ~[?:?]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:225)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:271)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:75)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchTagValues(SolrClientNodeStateProvider.java:139)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:128)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:506)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:378)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:235)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  ~[?:?]
> at 

[jira] [Commented] (SOLR-14412) NPE in MetricsHistoryHandler when running single node in cloud mode with SSL

2020-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087972#comment-17087972
 ] 

ASF subversion and git services commented on SOLR-14412:


Commit 58f9c79c6d2bc6ae57e823dff071dd68a72d8956 in lucene-solr's branch 
refs/heads/master from Mike Drob
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=58f9c79 ]

SOLR-14412 zkRun+https (#1437)

SOLR-14412

Check for results after retries failed in SolrClientNodeStateProvider
Set urlScheme=https with zkRun

> NPE in MetricsHistoryHandler when running single node in cloud mode with SSL
> 
>
> Key: SOLR-14412
> URL: https://issues.apache.org/jira/browse/SOLR-14412
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: master (9.0)
> Environment: 9.0.0-SNAPSHOT, SSL enabled (self-signed certificate), 
> cloud mode.
>Reporter: Mike Drob
>Priority: Major
>
> {noformat}
> 2020-04-17 15:35:19.335 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:19.842 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:20.346 INFO  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider Error on getting remote info, trying 
> again: IOException occurred when talking to server at: 
> http://localhost:8983/solr
> 2020-04-17 15:35:20.851 WARN  (MetricsHistoryHandler-22-thread-1) [   ] 
> o.a.s.c.s.i.SolrClientNodeStateProvider could not get tags from node 
> localhost:8983_solr => java.lang.NullPointerException
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
> java.lang.NullPointerException: null
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.lambda$fetchReplicaMetrics$7(SolrClientNodeStateProvider.java:226)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at java.util.HashMap.forEach(HashMap.java:1336) ~[?:?]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchReplicaMetrics(SolrClientNodeStateProvider.java:225)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:271)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:75)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchTagValues(SolrClientNodeStateProvider.java:139)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:128)
>  ~[solr-solrj-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:506)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:378)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:235)
>  ~[solr-core-9.0.0-SNAPSHOT.jar:9.0.0-SNAPSHOT 
> 74ecc13816fb6aae6e512e2e9d815459e235a120 - mdrob - 2020-04-17 10:28:44]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) ~[?:?]
> at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> ~[?:?]
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  ~[?:?]
> at 

[jira] [Resolved] (LUCENE-9273) Speed up geometry queries by specialising Component2D spatial operations

2020-04-20 Thread Ignacio Vera (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera resolved LUCENE-9273.
--
Fix Version/s: 8.6
 Assignee: Ignacio Vera
   Resolution: Fixed

> Speed up geometry queries by specialising Component2D spatial operations
> 
>
> Key: LUCENE-9273
> URL: https://issues.apache.org/jira/browse/LUCENE-9273
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a follow-up to an observation from [~jpountz], who noticed that 
> regardless of the spatial operation being executed (e.g. Intersects), we 
> always call the method component2D#relateTriangle; it would be less 
> expensive if we had a specialised method for intersects.
> The other frustrating thing is that regardless of the type of triangle we 
> are dealing with, we decode all points of the triangle. In addition, most 
> implementations of component2D#relateTriangle contain code that checks the 
> type of triangle and then calls specialised methods.
> In this issue it is proposed to replace the method component2D#relateTriangle 
> with the following methods:
> component2D#intersectsTriangle
> component2D#intersectsLine
> component2D#containsTriangle
> component2D#containsLine
> For consistency we add the following methods as well:
> component2D#withinPoint
> component2D#withinLine
> Finally, the resolution of the triangle type is added to the decoding of the 
> triangle.
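To illustrate why the specialised methods help, here is a toy example with axis-aligned boxes (invented for illustration, not Lucene's Component2D code): the generic relate must also decide containment, while a dedicated intersects predicate answers with a single cheap test.

```java
public class SpecialiseDemo {
    enum Relation { DISJOINT, INTERSECTS, CONTAINS }

    // Boxes encoded as {minX, maxX, minY, maxY}.
    // Generic relation: must do the extra containment work even when the
    // caller only wanted a yes/no intersection answer.
    static Relation relate(double[] a, double[] b) {
        boolean disjoint = a[1] < b[0] || b[1] < a[0] || a[3] < b[2] || b[3] < a[2];
        if (disjoint) return Relation.DISJOINT;
        boolean contains = a[0] <= b[0] && b[1] <= a[1] && a[2] <= b[2] && b[3] <= a[3];
        return contains ? Relation.CONTAINS : Relation.INTERSECTS;
    }

    // Specialised predicate: no containment work at all.
    static boolean intersects(double[] a, double[] b) {
        return !(a[1] < b[0] || b[1] < a[0] || a[3] < b[2] || b[3] < a[2]);
    }

    public static void main(String[] args) {
        double[] a = {0, 10, 0, 10}, b = {5, 15, 5, 15};
        System.out.println(relate(a, b));      // INTERSECTS
        System.out.println(intersects(a, b));  // true
    }
}
```

The same reasoning motivates deciding the triangle type once at decode time: each specialised method can then skip decoding points it does not need.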



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9273) Speed up geometry queries by specialising Component2D spatial operations

2020-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087946#comment-17087946
 ] 

ASF subversion and git services commented on LUCENE-9273:
-

Commit a001f135f229a4c08afe4b6eb3dc11da5db91605 in lucene-solr's branch 
refs/heads/branch_8x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a001f13 ]

LUCENE-9273: Speed up geometry queries by specialising Component2D spatial 
operations (#1341)

Speed up geometry queries by specialising Component2D spatial operations. 
Instead of using a generic relate method for all relations, we use specialised 
methods for each one. In addition, the type of triangle is computed at 
deserialisation time, so we can be more selective when decoding the points 
of a triangle.


> Speed up geometry queries by specialising Component2D spatial operations
> 
>
> Key: LUCENE-9273
> URL: https://issues.apache.org/jira/browse/LUCENE-9273
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a follow-up to an observation from [~jpountz], who noticed that 
> regardless of the spatial operation being executed (e.g. Intersects), we 
> always call the method component2D#relateTriangle; it would be less 
> expensive if we had a specialised method for intersects.
> The other frustrating thing is that regardless of the type of triangle we 
> are dealing with, we decode all points of the triangle. In addition, most 
> implementations of component2D#relateTriangle contain code that checks the 
> type of triangle and then calls specialised methods.
> In this issue it is proposed to replace the method component2D#relateTriangle 
> with the following methods:
> component2D#intersectsTriangle
> component2D#intersectsLine
> component2D#containsTriangle
> component2D#containsLine
> For consistency we add the following methods as well:
> component2D#withinPoint
> component2D#withinLine
> Finally, the resolution of the triangle type is added to the decoding of the 
> triangle.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9273) Speed up geometry queries by specialising Component2D spatial operations

2020-04-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087943#comment-17087943
 ] 

ASF subversion and git services commented on LUCENE-9273:
-

Commit f914e08b3647ad91eb97ada6ebefe1e96ebd5a89 in lucene-solr's branch 
refs/heads/master from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=f914e08 ]

LUCENE-9273: Speed up geometry queries by specialising Component2D spatial 
operations (#1341)

Speed up geometry queries by specialising Component2D spatial operations. 
Instead of using a generic relate method for all relations, we use specialised 
methods for each one. In addition, the type of triangle is computed at 
deserialisation time, so we can be more selective when decoding the points 
of a triangle.

> Speed up geometry queries by specialising Component2D spatial operations
> 
>
> Key: LUCENE-9273
> URL: https://issues.apache.org/jira/browse/LUCENE-9273
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This is a follow-up to an observation from [~jpountz], who noticed that 
> regardless of the spatial operation being executed (e.g. Intersects), we 
> always call the method component2D#relateTriangle; it would be less 
> expensive if we had a specialised method for intersects.
> The other frustrating thing is that regardless of the type of triangle we 
> are dealing with, we decode all points of the triangle. In addition, most 
> implementations of component2D#relateTriangle contain code that checks the 
> type of triangle and then calls specialised methods.
> In this issue it is proposed to replace the method component2D#relateTriangle 
> with the following methods:
> component2D#intersectsTriangle
> component2D#intersectsLine
> component2D#containsTriangle
> component2D#containsLine
> For consistency we add the following methods as well:
> component2D#withinPoint
> component2D#withinLine
> Finally, the resolution of the triangle type is added to the decoding of the 
> triangle.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Affects Version/s: 8.4
   8.4.1

> Nodes view doesn't work correctly when Solr is hosted on Windows
> 
>
> Key: SOLR-14416
> URL: https://issues.apache.org/jira/browse/SOLR-14416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.7.2, 8.1, 8.2, 8.1.1, 8.3, 8.4, 8.3.1, 8.4.1
>Reporter: Colvin Cowie
>Priority: Minor
> Fix For: 8.5
>
> Attachments: screenshot-1.png
>
>
> I sent a message about this on the mailing list a long time ago and got no 
> replies.
> Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
> fixed in 8.5, but I will check.
> On Solr 8.1.1 / 7.7.2 with Oracle JDK 1.8.0_191 (25.191-b12), with Solr 
> running on Windows 10:
> In the Nodes view of the Admin UI, 
> http://localhost:8983/solr/#/~cloud?view=nodes, there is a refresh button. 
> However, when you click it, the only thing that gets visibly refreshed is 
> the 'bar chart' (not sure what to call it - it's shown when you choose show 
> details) of the index shard size on disk. The other stats do not update.
> Also, when there is more than one node, only some of the node information is 
> shown
>  !screenshot-1.png! 
> Firefox dev console shows:
> {noformat}
> _Error: s.system.uptime is undefined
> nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
> v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
> $eval@http://localhost:8983/solr/libs/angular.js:14406:16
> $digest@http://localhost:8983/solr/libs/angular.js:14222:15
> $apply@http://localhost:8983/solr/libs/angular.js:14511:13
> done@http://localhost:8983/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
> {noformat}
> The system response has upTimeMs in it for the JVM/JMX properties, but no 
> system/uptime
> {noformat}
> {
>   "responseHeader":{
> "status":0,
> "QTime":63},
>   "localhost:8983_solr":{
> "responseHeader":{
>   "status":0,
>   "QTime":49},
> "mode":"solrcloud",
> "zkHost":"localhost:9983",
> "solr_home":"...",
> "lucene":{
>   "solr-spec-version":"8.1.1",
>   "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:20:01",
>   "lucene-spec-version":"8.1.1",
>   "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:15:24"},
> "jvm":{
>   "version":"1.8.0_211 25.211-b12",
>   "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
>   "spec":{
> "vendor":"Oracle Corporation",
> "name":"Java Platform API Specification",
> "version":"1.8"},
>   "jre":{
> "vendor":"Oracle Corporation",
> "version":"1.8.0_211"},
>   "vm":{
> "vendor":"Oracle Corporation",
> "name":"Java HotSpot(TM) 64-Bit Server VM",
> "version":"25.211-b12"},
>   "processors":8,
>   "memory":{
> "free":"1.4 GB",
> "total":"2 GB",
> "max":"2 GB",
> "used":"566.7 MB (%27.7)",
> "raw":{
>   "free":1553268432,
>   "total":2147483648,
>   "max":2147483648,
>   "used":594215216,
>   "used%":27.670302242040634}},
>   "jmx":{
> "bootclasspath":"...",
> "classpath":"start.jar",
> "commandLineArgs":[...],
> "startTime":"2019-06-20T11:41:58.955Z",
> "upTimeMS":516602}},
> "system":{
>   "name":"Windows 10",
>   "arch":"amd64",
>   "availableProcessors":8,
>   "systemLoadAverage":-1.0,
>   "version":"10.0",
>   "committedVirtualMemorySize":2709114880,
>   "freePhysicalMemorySize":16710127616,
>   "freeSwapSpaceSize":16422531072,
>   "processCpuLoad":0.13941671744473663,
>   "processCpuTime":194609375000,
>   "systemCpuLoad":0.25816002967796037,
>   "totalPhysicalMemorySize":34261250048,
>   "totalSwapSpaceSize":39361523712},
> "node":"localhost:8983_solr"}}
> {noformat}
> The SystemInfoHandler does this:
> {code}
> // Try some command line things:
> try {
>   if (!Constants.WINDOWS) {
> info.add( "uname",  execute( "uname -a" ) );
> info.add( "uptime", execute( "uptime" ) );
>   }
> } catch( Exception ex ) {
>   log.warn("Unable to execute command line tools 
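The `uname`/`uptime` shell-outs above are skipped on Windows, which is why the response carries `upTimeMS` from JMX but no `system/uptime`. A cross-platform way to report JVM uptime without shelling out at all is the standard JMX runtime bean; a minimal sketch (an illustration, not Solr's actual fix):

```java
import java.lang.management.ManagementFactory;

public class UptimeSketch {
    // Works on every OS, unlike executing the `uptime` command:
    // the JVM's RuntimeMXBean tracks its own uptime in milliseconds.
    public static long jvmUptimeMillis() {
        return ManagementFactory.getRuntimeMXBean().getUptime();
    }

    public static void main(String[] args) {
        System.out.println("JVM uptime (ms): " + jvmUptimeMillis());
    }
}
```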

[jira] [Comment Edited] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087916#comment-17087916
 ] 

Colvin Cowie edited comment on SOLR-14416 at 4/20/20, 4:44 PM:
---

Ah, it's already been fixed by SOLR-13983 in 8.5 :)

I didn't find it because I searched for the js error in JIRA, rather than the 
SystemInfoHandler class


was (Author: cjcowie):
Ah, it's already been fixed in 8.5 :)

I didn't find because I searched for the js error in JIRA, rather than the 
SystemInfoHandler class

> Nodes view doesn't work correctly when Solr is hosted on Windows
> 
>
> Key: SOLR-14416
> URL: https://issues.apache.org/jira/browse/SOLR-14416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.7.2, 8.1, 8.2, 8.1.1, 8.3, 8.4, 8.3.1, 8.4.1
>Reporter: Colvin Cowie
>Priority: Minor
> Fix For: 8.5
>
> Attachments: screenshot-1.png
>
>
> I sent a message about this on the mailing list a long time ago and got no 
> replies.
> Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
> fixed in 8.5, but I will check.
> On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
> Windows 10
> In the Nodes view of the Admin 
> UI, http://localhost:8983/solr/#/~cloud?view=nodes, there is a refresh button. 
> However when you click it, the only thing that gets visibly refreshed is the 
> 'bar chart' (not sure what to call it - it's shown when you choose show 
> details) of the index shard size on disk. The other stats do not update.
> Also, when there is more than one node, only some of the node information is 
> shown
>  !screenshot-1.png! 
> Firefox dev console shows:
> {noformat}
> _Error: s.system.uptime is undefined
> nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
> v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
> $eval@http://localhost:8983/solr/libs/angular.js:14406:16
> $digest@http://localhost:8983/solr/libs/angular.js:14222:15
> $apply@http://localhost:8983/solr/libs/angular.js:14511:13
> done@http://localhost:8983/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
> {noformat}
> The system response has upTimeMs in it for the JVM/JMX properties, but no 
> system/uptime
> {noformat}
> {
>   "responseHeader":{
> "status":0,
> "QTime":63},
>   "localhost:8983_solr":{
> "responseHeader":{
>   "status":0,
>   "QTime":49},
> "mode":"solrcloud",
> "zkHost":"localhost:9983",
> "solr_home":"...",
> "lucene":{
>   "solr-spec-version":"8.1.1",
>   "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:20:01",
>   "lucene-spec-version":"8.1.1",
>   "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:15:24"},
> "jvm":{
>   "version":"1.8.0_211 25.211-b12",
>   "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
>   "spec":{
> "vendor":"Oracle Corporation",
> "name":"Java Platform API Specification",
> "version":"1.8"},
>   "jre":{
> "vendor":"Oracle Corporation",
> "version":"1.8.0_211"},
>   "vm":{
> "vendor":"Oracle Corporation",
> "name":"Java HotSpot(TM) 64-Bit Server VM",
> "version":"25.211-b12"},
>   "processors":8,
>   "memory":{
> "free":"1.4 GB",
> "total":"2 GB",
> "max":"2 GB",
> "used":"566.7 MB (%27.7)",
> "raw":{
>   "free":1553268432,
>   "total":2147483648,
>   "max":2147483648,
>   "used":594215216,
>   "used%":27.670302242040634}},
>   "jmx":{
> "bootclasspath":"...",
> "classpath":"start.jar",
> "commandLineArgs":[...],
> "startTime":"2019-06-20T11:41:58.955Z",
> "upTimeMS":516602}},
> "system":{
>   "name":"Windows 10",
>   "arch":"amd64",
>   "availableProcessors":8,
>   "systemLoadAverage":-1.0,
>   "version":"10.0",
>   "committedVirtualMemorySize":2709114880,
>   "freePhysicalMemorySize":16710127616,
>   "freeSwapSpaceSize":16422531072,
>   "processCpuLoad":0.13941671744473663,
>   "processCpuTime":194609375000,
>   "systemCpuLoad":0.25816002967796037,
>   "totalPhysicalMemorySize":34261250048,
>   "totalSwapSpaceSize":39361523712},
> 

[jira] [Resolved] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie resolved SOLR-14416.
-
Fix Version/s: 8.5
   Resolution: Duplicate

Ah, it's already been fixed in 8.5 :)

I didn't find it because I searched for the js error in JIRA, rather than the 
SystemInfoHandler class

> Nodes view doesn't work correctly when Solr is hosted on Windows
> 
>
> Key: SOLR-14416
> URL: https://issues.apache.org/jira/browse/SOLR-14416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.7.2, 8.1, 8.2, 8.1.1, 8.3, 8.3.1
>Reporter: Colvin Cowie
>Priority: Minor
> Fix For: 8.5
>
> Attachments: screenshot-1.png
>
>
> I sent a message about this on the mailing list a long time ago and got no 
> replies.
> Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
> fixed in 8.5, but I will check.
> On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
> Windows 10
> In the Nodes view of the Admin 
> UI, http://localhost:8983/solr/#/~cloud?view=nodes, there is a refresh button. 
> However when you click it, the only thing that gets visibly refreshed is the 
> 'bar chart' (not sure what to call it - it's shown when you choose show 
> details) of the index shard size on disk. The other stats do not update.
> Also, when there is more than one node, only some of the node information is 
> shown
>  !screenshot-1.png! 
> Firefox dev console shows:
> {noformat}
> _Error: s.system.uptime is undefined
> nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
> v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
> $eval@http://localhost:8983/solr/libs/angular.js:14406:16
> $digest@http://localhost:8983/solr/libs/angular.js:14222:15
> $apply@http://localhost:8983/solr/libs/angular.js:14511:13
> done@http://localhost:8983/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
> {noformat}
> The system response has upTimeMs in it for the JVM/JMX properties, but no 
> system/uptime
> {noformat}
> {
>   "responseHeader":{
> "status":0,
> "QTime":63},
>   "localhost:8983_solr":{
> "responseHeader":{
>   "status":0,
>   "QTime":49},
> "mode":"solrcloud",
> "zkHost":"localhost:9983",
> "solr_home":"...",
> "lucene":{
>   "solr-spec-version":"8.1.1",
>   "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:20:01",
>   "lucene-spec-version":"8.1.1",
>   "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:15:24"},
> "jvm":{
>   "version":"1.8.0_211 25.211-b12",
>   "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
>   "spec":{
> "vendor":"Oracle Corporation",
> "name":"Java Platform API Specification",
> "version":"1.8"},
>   "jre":{
> "vendor":"Oracle Corporation",
> "version":"1.8.0_211"},
>   "vm":{
> "vendor":"Oracle Corporation",
> "name":"Java HotSpot(TM) 64-Bit Server VM",
> "version":"25.211-b12"},
>   "processors":8,
>   "memory":{
> "free":"1.4 GB",
> "total":"2 GB",
> "max":"2 GB",
> "used":"566.7 MB (%27.7)",
> "raw":{
>   "free":1553268432,
>   "total":2147483648,
>   "max":2147483648,
>   "used":594215216,
>   "used%":27.670302242040634}},
>   "jmx":{
> "bootclasspath":"...",
> "classpath":"start.jar",
> "commandLineArgs":[...],
> "startTime":"2019-06-20T11:41:58.955Z",
> "upTimeMS":516602}},
> "system":{
>   "name":"Windows 10",
>   "arch":"amd64",
>   "availableProcessors":8,
>   "systemLoadAverage":-1.0,
>   "version":"10.0",
>   "committedVirtualMemorySize":2709114880,
>   "freePhysicalMemorySize":16710127616,
>   "freeSwapSpaceSize":16422531072,
>   "processCpuLoad":0.13941671744473663,
>   "processCpuTime":194609375000,
>   "systemCpuLoad":0.25816002967796037,
>   "totalPhysicalMemorySize":34261250048,
>   "totalSwapSpaceSize":39361523712},
> "node":"localhost:8983_solr"}}
> {noformat}
> The SystemInfoHandler does this:
> {code}
> // Try some command line things:
> try {
>   if (!Constants.WINDOWS) {
> info.add( "uname",  execute( "uname -a" ) );
> info.add( 

[jira] [Commented] (LUCENE-9332) Migrate validate source patterns task from ant/groovy to gradle/groovy

2020-04-20 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087887#comment-17087887
 ] 

Mike Drob commented on LUCENE-9332:
---

WIP PR - https://github.com/apache/lucene-solr/pull/1441

> Migrate validate source patterns task from ant/groovy to gradle/groovy
> --
>
> Key: LUCENE-9332
> URL: https://issues.apache.org/jira/browse/LUCENE-9332
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Mike Drob
>Assignee: Mike Drob
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> If we can migrate the validate source patterns check to a gradle task instead 
> of an ant task, then we can also properly declare inputs and outputs and 
> gradle will be able to cache the results correctly.
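Declared inputs and outputs are what let Gradle skip a task whose inputs have not changed since the last run. A toy illustration of that up-to-date check in plain Java (this is not Gradle's real fingerprinting, just the idea behind it):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

public class UpToDateSketch {
    private byte[] lastInputHash;

    // Fingerprint the declared inputs (here just a string of file names).
    static byte[] fingerprint(String inputs) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest(inputs.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError(e); // SHA-256 is always available
        }
    }

    /** Returns true if the task actually ran, false if it was skipped. */
    boolean runIfNeeded(String inputs) {
        byte[] hash = fingerprint(inputs);
        if (Arrays.equals(hash, lastInputHash)) {
            return false; // up to date: skip the validation work entirely
        }
        // ... do the (expensive) source-pattern validation here ...
        lastInputHash = hash;
        return true;
    }
}
```

This is why declaring inputs/outputs, rather than running the check unconditionally from ant, can shave time off precommit.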



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14391) Remove getDocSet's manual doc collection logic; remove ScoreFilter

2020-04-20 Thread Jason Gerlowski (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087858#comment-17087858
 ] 

Jason Gerlowski edited comment on SOLR-14391 at 4/20/20, 3:37 PM:
--

I attached a performance test driver to the early revisions of SOLR-13892, for 
others to verify the improvement.  One copy of it can be found as 
{{TestJoinQueryPerformance}} in this PR: 
https://github.com/apache/lucene-solr/pull/1159/files

That said, without having looked at this change here, I can't say whether that 
driver is relevant/helpful at all.  The driver runs at the http API level, so 
you'd have to put in the plumbing to expose both alternatives if it's not there 
currently.  The data generation that the driver does was also pretty specific to 
the "join" use case I was looking at in SOLR-13892, so you'd likely need to change 
that pretty significantly to match things here.


was (Author: gerlowskija):
I attached some a performance test driver to the early revisions of SOLR-13892, 
for others to verify the improvement.  One copy of it can be found as 
{{TestJoinQueryPerformance}} in this PR: 
https://github.com/apache/lucene-solr/pull/1159/files

That said, without having looked at this change here, I can't say whether that 
driver is relevant/helpful at all.  The driver runs at the http API level, so 
you'd have to put in the plumbing to expose both alternatives if it's not there 
currently.  The data generation that the drive does was also pretty specific to 
the "join" use case I was looking at in S-13892, so you'd likely need to change 
that pretty significantly to match things here.

> Remove getDocSet's manual doc collection logic; remove ScoreFilter
> --
>
> Key: SOLR-14391
> URL: https://issues.apache.org/jira/browse/SOLR-14391
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.6
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{SolrIndexSearcher.getDocSet(List)}} calls getProcessedFilter and 
> then basically loops over doc IDs, passing them through the filter, and 
> passes them to the Collector.  This logic is redundant with what Lucene 
> searcher.search(query,collector) will ultimately do in BulkScorer, and so I 
> propose we remove all that code and delegate to Lucene.
> Also, the top of this method looks to see if any query implements the 
> "ScoreFilter" marker interface (only implemented by CollapsingPostFilter) and 
> if so delegates to {{getDocSetScore}} method instead.  That method has an 
> implementation close to what I propose getDocSet be changed to; so it can be 
> removed along with this ScoreFilter interface 
> searcher.search(query,collector).
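The manual loop being removed can be pictured with simplified stand-in types (these are not Lucene's actual Bits/Collector classes): walking doc IDs by hand, applying the filter, and feeding survivors to a collector duplicates what BulkScorer already does once the query and collector are handed to searcher.search.

```java
import java.util.ArrayList;
import java.util.List;

public class DocSetSketch {
    // Simplified stand-ins for a filter bitset and a doc-ID collector.
    interface Bits { boolean get(int docId); }
    interface DocCollector { void collect(int docId); }

    // The redundant pattern: loop over doc IDs, pass each through the
    // filter, and hand matches to the collector. Delegating to
    // searcher.search(query, collector) lets Lucene do this instead.
    static int manualCollect(int maxDoc, Bits filter, DocCollector collector) {
        int count = 0;
        for (int doc = 0; doc < maxDoc; doc++) {
            if (filter == null || filter.get(doc)) {
                collector.collect(doc);
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        List<Integer> hits = new ArrayList<>();
        int n = manualCollect(10, doc -> doc % 2 == 0, hits::add);
        System.out.println(n + " docs collected: " + hits);
    }
}
```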






[jira] [Commented] (SOLR-14391) Remove getDocSet's manual doc collection logic; remove ScoreFilter

2020-04-20 Thread Jason Gerlowski (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087858#comment-17087858
 ] 

Jason Gerlowski commented on SOLR-14391:


I attached a performance test driver to the early revisions of SOLR-13892, 
for others to verify the improvement.  One copy of it can be found as 
{{TestJoinQueryPerformance}} in this PR: 
https://github.com/apache/lucene-solr/pull/1159/files

That said, without having looked at this change here, I can't say whether that 
driver is relevant/helpful at all.  The driver runs at the http API level, so 
you'd have to put in the plumbing to expose both alternatives if it's not there 
currently.  The data generation that the driver does was also pretty specific to 
the "join" use case I was looking at in SOLR-13892, so you'd likely need to change 
that pretty significantly to match things here.

> Remove getDocSet's manual doc collection logic; remove ScoreFilter
> --
>
> Key: SOLR-14391
> URL: https://issues.apache.org/jira/browse/SOLR-14391
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.6
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> {{SolrIndexSearcher.getDocSet(List)}} calls getProcessedFilter and 
> then basically loops over doc IDs, passing them through the filter, and 
> passes them to the Collector.  This logic is redundant with what Lucene 
> searcher.search(query,collector) will ultimately do in BulkScorer, and so I 
> propose we remove all that code and delegate to Lucene.
> Also, the top of this method looks to see if any query implements the 
> "ScoreFilter" marker interface (only implemented by CollapsingPostFilter) and 
> if so delegates to {{getDocSetScore}} method instead.  That method has an 
> implementation close to what I propose getDocSet be changed to; so it can be 
> removed along with this ScoreFilter interface 
> searcher.search(query,collector).






[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-04-20 Thread Tomoko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087856#comment-17087856
 ] 

Tomoko Uchida commented on LUCENE-9321:
---

I opened an issue for porting "changes-to-html" task and assigned myself on it: 
LUCENE-9333.

[~dweiss] // cc [~uschindler] [~sarowe]
About "process-webpages", I am very sorry but it's more than I can handle - I 
don't fully understand what the ant script does or how the target should be 
ported to gradle (and then cannot verify if the ported gradle task works 
correctly).





> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.






[jira] [Comment Edited] (LUCENE-9317) Resolve package name conflicts for StandardAnalyzer to allow Java module system support

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087832#comment-17087832
 ] 

Uwe Schindler edited comment on LUCENE-9317 at 4/20/20, 3:12 PM:
-

Hi David,

On a first look, this is not too bad. I am still not sure if all the stuff from the 
oal.utils package should be moved over to core.

The second thing that I already mentioned: The Factory base classes should be 
next to the TokenFilter, CharFilter, Tokenizer base classes. I know this would 
need to change all "extends" in all implementations, but maybe we can for a 
while keep a fake subclass CharFilterFactory, TokenizerFactory classes in utils 
(just as some backwards compatibility layer).

Also the changes here are not committable - not even to master:
- The META-INF/services files need to be refactored, too. Tests for loading 
from SPI need to be added to core, too.
- Also many classes are auto-generated, so the whole logic in Ant/Gradle's 
build files needs to be moved. E.g., the code to generate UnicodeData.java (see 
the header of that file).

Uwe

P.S.: You sent me a private mail on the weekend, did you get my response? I 
just want to make sure that it was not lost in spam checks.


was (Author: thetaphi):
Hi David,

on a first look, this is no too bad. I am still not sure if all stuff from the 
oal.utils package should be moved over to core.

The second thing that I already mentioned: The Factory base classes should be 
next to the TokenFilter, CharFilter, Tokenizer base classes. I know this would 
need to change all "extends" in all filters, but maybe we can for a while keep 
a fake subclass CharFilterFactory, TokenizerFactory classes in utils (just as 
some backwards compatibility layer).

Also the changes here are not commitable - not even to master:
- The META-INF/services files need to be refactored, too. Tests for loading 
from SPI need to be added to core, too.
- Also many classes are auto-generated, so the whole logic in Ant/Gradle's 
build files need to be moved. E.g, the code to generate UnicodeData.java (see 
the header of that file).

Uwe

P.S.: You sent me a private mail on the weekend, did you get my response? I 
just want to make sure that it was not lost in spam checks.

> Resolve package name conflicts for StandardAnalyzer to allow Java module 
> system support
> ---
>
> Key: LUCENE-9317
> URL: https://issues.apache.org/jira/browse/LUCENE-9317
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: David Ryan
>Priority: Major
>  Labels: build, features
>
>  
> To allow Lucene to be modularised there are a few preparatory tasks to be 
> completed prior to this being possible.  The Java module system requires that 
> jars do not use the same package name in different jars.  The lucene-core and 
> lucene-analyzers-common both share the package 
> org.apache.lucene.analysis.standard.
> Possible resolutions to this issue are discussed by Uwe on the mailing list 
> here:
>  
> [http://mail-archives.apache.org/mod_mbox/lucene-dev/202004.mbox/%3CCAM21Rt8FHOq_JeUSELhsQJH0uN0eKBgduBQX4fQKxbs49TLqzA%40mail.gmail.com%3E]
> {quote}About StandardAnalyzer: Unfortunately I aggressively complained a 
> while back when Mike McCandless wanted to move standard analyzer out of the 
> analysis package into core (“for convenience”). This was a bad step, and IMHO 
> we should revert that or completely rename the packages and everything. The 
> problem here is: As the analysis services are only part of lucene-analyzers, 
> we had to leave the factory classes there, but move the implementation 
> classes in core. The package has to be the same. The only way around that is 
> to move the analysis factory framework also to core (I would not be against 
> that). This would include all factory base classes and the service loading 
> stuff. Then we can move standard analyzer and some of the filters/tokenizers 
> including their factories to core and that problem would be solved.
> {quote}
> There are two options here, either move factory framework into core or revert 
> StandardAnalyzer back to lucene-analyzers.  In the email, the solution lands 
> on reverting back as per the task list:
> {quote}Add some preparatory issues to cleanup class hierarchy: Move Analysis 
> SPI to core / remove StandardAnalyzer and related classes out of core back to 
> analysis
> {quote}
>  
>  
>  
>  






[jira] [Commented] (LUCENE-9317) Resolve package name conflicts for StandardAnalyzer to allow Java module system support

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087832#comment-17087832
 ] 

Uwe Schindler commented on LUCENE-9317:
---

Hi David,

On a first look, this is not too bad. I am still not sure if all the stuff from the 
oal.utils package should be moved over to core.

The second thing that I already mentioned: The Factory base classes should be 
next to the TokenFilter, CharFilter, Tokenizer base classes. I know this would 
need to change all "extends" in all filters, but maybe we can for a while keep 
a fake subclass CharFilterFactory, TokenizerFactory classes in utils (just as 
some backwards compatibility layer).

Also the changes here are not committable - not even to master:
- The META-INF/services files need to be refactored, too. Tests for loading 
from SPI need to be added to core, too.
- Also many classes are auto-generated, so the whole logic in Ant/Gradle's 
build files needs to be moved. E.g., the code to generate UnicodeData.java (see 
the header of that file).

Uwe

P.S.: You sent me a private mail on the weekend, did you get my response? I 
just want to make sure that it was not lost in spam checks.

> Resolve package name conflicts for StandardAnalyzer to allow Java module 
> system support
> ---
>
> Key: LUCENE-9317
> URL: https://issues.apache.org/jira/browse/LUCENE-9317
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Affects Versions: master (9.0)
>Reporter: David Ryan
>Priority: Major
>  Labels: build, features
>
>  
> To allow Lucene to be modularised there are a few preparatory tasks to be 
> completed prior to this being possible.  The Java module system requires that 
> jars do not use the same package name in different jars.  The lucene-core and 
> lucene-analyzers-common both share the package 
> org.apache.lucene.analysis.standard.
> Possible resolutions to this issue are discussed by Uwe on the mailing list 
> here:
>  
> [http://mail-archives.apache.org/mod_mbox/lucene-dev/202004.mbox/%3CCAM21Rt8FHOq_JeUSELhsQJH0uN0eKBgduBQX4fQKxbs49TLqzA%40mail.gmail.com%3E]
> {quote}About StandardAnalyzer: Unfortunately I aggressively complained a 
> while back when Mike McCandless wanted to move standard analyzer out of the 
> analysis package into core (“for convenience”). This was a bad step, and IMHO 
> we should revert that or completely rename the packages and everything. The 
> problem here is: As the analysis services are only part of lucene-analyzers, 
> we had to leave the factory classes there, but move the implementation 
> classes in core. The package has to be the same. The only way around that is 
> to move the analysis factory framework also to core (I would not be against 
> that). This would include all factory base classes and the service loading 
> stuff. Then we can move standard analyzer and some of the filters/tokenizers 
> including their factories to core and that problem would be solved.
> {quote}
> There are two options here, either move factory framework into core or revert 
> StandardAnalyzer back to lucene-analyzers.  In the email, the solution lands 
> on reverting back as per the task list:
> {quote}Add some preparatory issues to cleanup class hierarchy: Move Analysis 
> SPI to core / remove StandardAnalyzer and related classes out of core back to 
> analysis
> {quote}
>  
>  
>  
>  






[jira] [Created] (LUCENE-9333) Gradle task equivalent to ant's "change-to-html"

2020-04-20 Thread Tomoko Uchida (Jira)
Tomoko Uchida created LUCENE-9333:
-

 Summary: Gradle task equivalent to ant's "change-to-html"
 Key: LUCENE-9333
 URL: https://issues.apache.org/jira/browse/LUCENE-9333
 Project: Lucene - Core
  Issue Type: Task
  Components: general/build
Affects Versions: master (9.0)
Reporter: Tomoko Uchida
Assignee: Tomoko Uchida









[jira] [Commented] (LUCENE-9330) Make SortField responsible for index sorting

2020-04-20 Thread Alan Woodward (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087826#comment-17087826
 ] 

Alan Woodward commented on LUCENE-9330:
---

I've updated the PR, could you have a look Uwe?

> Make SortField responsible for index sorting
> 
>
> Key: LUCENE-9330
> URL: https://issues.apache.org/jira/browse/LUCENE-9330
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Index sorting is currently handled inside Sorter and MultiSorter, with 
> hard-coded implementations dependent on SortField types.  This means that you 
> can't sort by custom SortFields, and also that the logic for handling 
> specific sort types is split between several unrelated classes.
> SortFields should instead be able to implement their own index sorting 
> methods.






[jira] [Commented] (LUCENE-9330) Make SortField responsible for index sorting

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087821#comment-17087821
 ] 

Uwe Schindler commented on LUCENE-9330:
---

Thanks. And in the index we just save the name, like we do for codecs: it would 
just read the name and then call SortFieldProvider.forName().
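A hypothetical sketch of that name-based lookup, mirroring how codecs are resolved at read time (`SortFieldProvider`'s real API may differ; the registry and `Provider` type here are illustrative stand-ins):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SortFieldProviderSketch {
    // Stand-in for a provider that can deserialize one kind of SortField.
    public interface Provider { Object readSortField(); }

    private static final Map<String, Provider> REGISTRY = new ConcurrentHashMap<>();

    public static void register(String name, Provider provider) {
        REGISTRY.put(name, provider);
    }

    // The index stores only the provider's name; at read time the name is
    // resolved back to a provider, the same pattern as Codec.forName().
    public static Provider forName(String name) {
        Provider p = REGISTRY.get(name);
        if (p == null) {
            throw new IllegalArgumentException("Unknown SortField provider: " + name);
        }
        return p;
    }
}
```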

> Make SortField responsible for index sorting
> 
>
> Key: LUCENE-9330
> URL: https://issues.apache.org/jira/browse/LUCENE-9330
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Index sorting is currently handled inside Sorter and MultiSorter, with 
> hard-coded implementations dependent on SortField types.  This means that you 
> can't sort by custom SortFields, and also that the logic for handling 
> specific sort types is split between several unrelated classes.
> SortFields should instead be able to implement their own index sorting 
> methods.






[GitHub] [lucene-solr] madrob opened a new pull request #1441: LUCENE-9332 validate source patterns using gradle

2020-04-20 Thread GitBox


madrob opened a new pull request #1441:
URL: https://github.com/apache/lucene-solr/pull/1441


   If we have a gradle task with declared InputFiles and OutputFile then Gradle 
can figure out when the check is still up to date and when it needs to be 
rerun. Most of this is just a straight copy from the groovy script. This can 
take up to 20s off of precommit.
   
   TODO:
   * Figure out how to enable the ratDocument part of this, it was failing on 
imports for me.
   * Split into modules so that we don't run the whole task each time
   * Possibly split by file types?
   
   The rat part is the main thing that I need help with; I haven't been able to 
figure out the right incantation of dependency declarations and build script 
declarations and whatever else we need to make it in scope. Any pointers 
appreciated.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Created] (LUCENE-9332) Migrate validate source patterns task from ant/groovy to gradle/groovy

2020-04-20 Thread Mike Drob (Jira)
Mike Drob created LUCENE-9332:
-

 Summary: Migrate validate source patterns task from ant/groovy to 
gradle/groovy
 Key: LUCENE-9332
 URL: https://issues.apache.org/jira/browse/LUCENE-9332
 Project: Lucene - Core
  Issue Type: Sub-task
  Components: general/build
Reporter: Mike Drob
Assignee: Mike Drob


If we can migrate the validate source patterns check to a gradle task instead 
of an ant task, then we can also properly declare inputs and outputs and gradle 
will be able to cache the results correctly.






[jira] [Updated] (LUCENE-9331) TestIndexWriterDelete.testDeletesOnDiskFull can run for minutes

2020-04-20 Thread Adrien Grand (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-9331:
-
Description: 
This seed reproduces for me on branch_8x:

ant test  -Dtestcase=TestIndexWriterDelete -Dtests.method=testUpdatesOnDiskFull 
-Dtests.seed=63F37B47DD88EF8A -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=twq -Dtests.timezone=Greenwich 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

I didn't wait for the test to finish (it looks like it would take a very long 
time) though logging suggests that it would converge... eventually. This seems 
to be a combination of slow convergence, mixed with slow features of 
MockDirectoryWrapper like sleeps when opening inputs and costly calls like 
checkIndex. 

  was:
This seed reproduces for me:

ant test  -Dtestcase=TestIndexWriterDelete -Dtests.method=testUpdatesOnDiskFull 
-Dtests.seed=63F37B47DD88EF8A -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=twq -Dtests.timezone=Greenwich 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

I didn't wait for the test to finish (it looks like it would take a very long 
time) though logging suggests that it would converge... eventually. This seems 
to be a combination of slow convergence, mixed with slow features of 
MockDirectoryWrapper like sleeps when opening inputs and costly calls like 
checkIndex. 


> TestIndexWriterDelete.testDeletesOnDiskFull can run for minutes
> ---
>
> Key: LUCENE-9331
> URL: https://issues.apache.org/jira/browse/LUCENE-9331
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Adrien Grand
>Priority: Minor
>
> This seed reproduces for me on branch_8x:
> ant test  -Dtestcase=TestIndexWriterDelete 
> -Dtests.method=testUpdatesOnDiskFull -Dtests.seed=63F37B47DD88EF8A 
> -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=twq -Dtests.timezone=Greenwich -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> I didn't wait for the test to finish (it looks like it would take a very long 
> time) though logging suggests that it would converge... eventually. This 
> seems to be a combination of slow convergence, mixed with slow features of 
> MockDirectoryWrapper like sleeps when opening inputs and costly calls like 
> checkIndex. 






[jira] [Commented] (LUCENE-9331) TestIndexWriterDelete.testDeletesOnDiskFull can run for minutes

2020-04-20 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087795#comment-17087795
 ] 

Adrien Grand commented on LUCENE-9331:
--

The following change fixed the issue for me, but I'd like to make sure that it 
doesn't defeat the purpose of the test. It increments the amount of free space 
relative to the current amount in order to speed up convergence of the test.

{noformat}
diff --git 
a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java 
b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
index e879019897b..2e08d3b7ee1 100644
--- a/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
+++ b/lucene/core/src/test/org/apache/lucene/index/TestIndexWriterDelete.java
@@ -702,8 +702,8 @@ public class TestIndexWriterDelete extends LuceneTestCase {
   }
   dir.close();
 
-  // Try again with 10 more bytes of free space:
-  diskFree += 10;
+  // Try again with more bytes of free space:
+  diskFree += Math.max(10, diskFree >>> 3);
 }
 startDir.close();
   }
{noformat}
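
The effect of the policy change can be sketched numerically (illustrative only, 
not part of the patch or the test): counting how many retry rounds each 
increment policy needs before {{diskFree}} reaches a target.

```java
// Compares the fixed +10 increment with the relative +max(10, diskFree/8)
// increment from the patch: the former needs a number of rounds linear in
// the target, the latter only logarithmic.
public class IncrementDemo {
    static long rounds(long diskFree, long target, boolean relative) {
        long n = 0;
        while (diskFree < target) {
            diskFree += relative ? Math.max(10, diskFree >>> 3) : 10;
            n++;
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println("+10 increments:          " + rounds(10, 1_000_000, false));
        System.out.println("+max(10, f/8) increments: " + rounds(10, 1_000_000, true));
    }
}
```

For a 1 MB target the fixed increment needs tens of thousands of rounds, while 
the relative one needs under a hundred, which is where the speedup comes from.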

[~mikemccand] It looks like you are the most familiar with this test, what do 
you think?

> TestIndexWriterDelete.testDeletesOnDiskFull can run for minutes
> ---
>
> Key: LUCENE-9331
> URL: https://issues.apache.org/jira/browse/LUCENE-9331
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Adrien Grand
>Priority: Minor
>
> This seed reproduces for me:
> ant test  -Dtestcase=TestIndexWriterDelete 
> -Dtests.method=testUpdatesOnDiskFull -Dtests.seed=63F37B47DD88EF8A 
> -Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=twq -Dtests.timezone=Greenwich -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
> I didn't wait for the test to finish (it looks like it would take a very long 
> time) though logging suggests that it would converge... eventually. This 
> seems to be a combination of slow convergence, mixed with slow features of 
> MockDirectoryWrapper like sleeps when opening inputs and costly calls like 
> checkIndex. 






[jira] [Created] (LUCENE-9331) TestIndexWriterDelete.testDeletesOnDiskFull can run for minutes

2020-04-20 Thread Adrien Grand (Jira)
Adrien Grand created LUCENE-9331:


 Summary: TestIndexWriterDelete.testDeletesOnDiskFull can run for 
minutes
 Key: LUCENE-9331
 URL: https://issues.apache.org/jira/browse/LUCENE-9331
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Adrien Grand


This seed reproduces for me:

ant test  -Dtestcase=TestIndexWriterDelete -Dtests.method=testUpdatesOnDiskFull 
-Dtests.seed=63F37B47DD88EF8A -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=twq -Dtests.timezone=Greenwich 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

I didn't wait for the test to finish (it looks like it would take a very long 
time) though logging suggests that it would converge... eventually. This seems 
to be a combination of slow convergence, mixed with slow features of 
MockDirectoryWrapper like sleeps when opening inputs and costly calls like 
checkIndex. 






[jira] [Commented] (SOLR-8319) NPE when creating pivot

2020-04-20 Thread Isabelle Giguere (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087774#comment-17087774
 ] 

Isabelle Giguere commented on SOLR-8319:


Patch attached, based on master. Please see the previous comment for details.

> NPE when creating pivot
> ---
>
> Key: SOLR-8319
> URL: https://issues.apache.org/jira/browse/SOLR-8319
> Project: Solr
>  Issue Type: Bug
>Reporter: Neil Ireson
>Priority: Major
> Attachments: SOLR-8319.patch
>
>
> I get a NPE, the trace is shown at the end.
> The problem seems to be this line in the getSubset method:
>   Query query = ft.getFieldQuery(null, field, pivotValue);
> Which takes a value from the index and then analyses it to create a query. I 
> believe the problem is that when my analysis process is applied twice it 
> results in a null query. OK this might be seen as my issue because of dodgy 
> analysis, I thought it might be because I have the wrong order with 
> LengthFilterFactory before EnglishPossessiveFilterFactory and 
> KStemFilterFactory, i.e.:
> 
> 
>  
> So that "cat's" -> "cat" -> "", however any filter order I tried still 
> resulted in a NPE, and perhaps there is a viable case where parsing a term 
> twice results in a null query.
> The thing is I don't see why when the query term comes from the index it has 
> to undergo any analysis. If the term is from the index can it not simply be 
> created using a TermQuery, which I would imagine would also be faster. I 
> altered the "getFieldQuery" line above to the following and that has fixed my 
> NPE issue.
>   Query query = new TermQuery(new Term(field.getName(), pivotValue));
> So far this hasn't caused any other issues but perhaps that is due to my use 
> of Solr, rather than actually fixing an issue. 
> o.a.s.c.SolrCore java.lang.NullPointerException
> at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
> at 
> org.apache.solr.util.ConcurrentLRUCache.get(ConcurrentLRUCache.java:91)
> at org.apache.solr.search.FastLRUCache.get(FastLRUCache.java:130)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1296)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubset(PivotFacetProcessor.java:375)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.doPivots(PivotFacetProcessor.java:305)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:228)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:170)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:262)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:277)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> 
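
A minimal model of the failure mode described in this report (the filter names, 
order, and length threshold are illustrative, not taken from the reporter's 
schema): analyzing a term that already came out of the index a second time can 
leave no token at all, and the resulting null query is what trips the NPE.

```java
// Toy analysis chain: a length filter (min 4 chars) followed by possessive
// stripping. First pass: "cat's" (5 chars) survives the length filter and
// loses the "'s". Second pass: "cat" (3 chars) is dropped by the length
// filter, so re-analyzing an index term yields null -- a null query.
public class DoubleAnalysis {
    static String analyze(String token) {
        if (token == null || token.length() < 4) {
            return null;                                   // length filter
        }
        return token.endsWith("'s")
            ? token.substring(0, token.length() - 2)       // possessive filter
            : token;
    }
}
```

Building a TermQuery directly from the stored pivot value, as the reporter 
suggests, sidesteps the second analysis pass entirely.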

[jira] [Updated] (SOLR-8319) NPE when creating pivot

2020-04-20 Thread Isabelle Giguere (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-8319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Isabelle Giguere updated SOLR-8319:
---
Attachment: SOLR-8319.patch

> NPE when creating pivot
> ---
>
> Key: SOLR-8319
> URL: https://issues.apache.org/jira/browse/SOLR-8319
> Project: Solr
>  Issue Type: Bug
>Reporter: Neil Ireson
>Priority: Major
> Attachments: SOLR-8319.patch
>
>
> I get a NPE, the trace is shown at the end.
> The problem seems to be this line in the getSubset method:
>   Query query = ft.getFieldQuery(null, field, pivotValue);
> Which takes a value from the index and then analyses it to create a query. I 
> believe the problem is that when my analysis process is applied twice it 
> results in a null query. OK this might be seen as my issue because of dodgy 
> analysis, I thought it might be because I have the wrong order with 
> LengthFilterFactory before EnglishPossessiveFilterFactory and 
> KStemFilterFactory, i.e.:
> 
> 
>  
> So that "cat's" -> "cat" -> "", however any filter order I tried still 
> resulted in a NPE, and perhaps there is a viable case where parsing a term 
> twice results in a null query.
> The thing is I don't see why when the query term comes from the index it has 
> to undergo any analysis. If the term is from the index can it not simply be 
> created using a TermQuery, which I would imagine would also be faster. I 
> altered the "getFieldQuery" line above to the following and that has fixed my 
> NPE issue.
>   Query query = new TermQuery(new Term(field.getName(), pivotValue));
> So far this hasn't caused any other issues but perhaps that is due to my use 
> of Solr, rather than actually fixing an issue. 
> o.a.s.c.SolrCore java.lang.NullPointerException
> at 
> java.util.concurrent.ConcurrentHashMap.get(ConcurrentHashMap.java:936)
> at 
> org.apache.solr.util.ConcurrentLRUCache.get(ConcurrentLRUCache.java:91)
> at org.apache.solr.search.FastLRUCache.get(FastLRUCache.java:130)
> at 
> org.apache.solr.search.SolrIndexSearcher.getDocSet(SolrIndexSearcher.java:1296)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.getSubset(PivotFacetProcessor.java:375)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.doPivots(PivotFacetProcessor.java:305)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.processSingle(PivotFacetProcessor.java:228)
> at 
> org.apache.solr.handler.component.PivotFacetProcessor.process(PivotFacetProcessor.java:170)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:262)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:277)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2068)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:669)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:462)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:214)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:179)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:499)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> 

[jira] [Commented] (LUCENE-9280) Add ability to skip non-competitive documents on field sort

2020-04-20 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087772#comment-17087772
 ] 

David Smiley commented on LUCENE-9280:
--

Cool!

> Add ability to skip non-competitive documents on field sort 
> 
>
> Key: LUCENE-9280
> URL: https://issues.apache.org/jira/browse/LUCENE-9280
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mayya Sharipova
>Priority: Minor
>  Time Spent: 14.5h
>  Remaining Estimate: 0h
>
> Today collectors, once they collect enough docs, can instruct scorers to 
> update their iterators to skip non-competitive documents. This is applicable 
> only for a case when we need top docs by _score.
> It would be nice to also have an ability to skip non-competitive docs when we 
> need top docs sorted by other fields different from _score. 






[GitHub] [lucene-solr] dsmiley commented on issue #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-04-20 Thread GitBox


dsmiley commented on issue #1351:
URL: https://github.com/apache/lucene-solr/pull/1351#issuecomment-616579840


   @mikemccand I'm curious about your statement:
   
   > The wikimedium1m corpus is really too small to draw strong conclusions -- 
I would use it to run a quick performance test, e.g. to see that it can run to 
completion, not dying with an exception, but then run the real test on 
wikimediumall.
   
   Should we infer that you don't think a 1M doc corpus is realistic in many 
production settings of Lucene?  That's the implication I'm reading from your 
statement, which I definitely do not agree with.  I've seen plenty of 
e-commerce search engines in the few-hundred K range (and lower), and also 
multi-tenant search where most tenants don't have much data in their system.






[jira] [Commented] (LUCENE-9330) Make SortField responsible for index sorting

2020-04-20 Thread Alan Woodward (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087635#comment-17087635
 ] 

Alan Woodward commented on LUCENE-9330:
---

I think this could work if we had an intermediary class, something like a 
SortFieldProvider, that would act as the SPI service and then build the 
relevant SortField based on serialization logic.  I'll give it a go, thanks Uwe.
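
A rough sketch of what such a SortFieldProvider could look like (names and 
signatures here are illustrative, not Lucene's actual API; a plain map stands 
in for the ServiceLoader/NamedSPI machinery):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative only: the index stores just getName(), and readers resolve
// it back to a provider via forName() -- the same lookup pattern codecs
// and postings formats use.
public class SortFieldProviders {
    interface SortFieldProvider {
        String getName();                  // stable name written to the index
        Object readSortField(byte[] data); // deserialize into a SortField
    }

    private static final Map<String, SortFieldProvider> REGISTRY =
        new ConcurrentHashMap<>();

    static void register(SortFieldProvider provider) {
        REGISTRY.put(provider.getName(), provider);
    }

    static SortFieldProvider forName(String name) {
        SortFieldProvider p = REGISTRY.get(name);
        if (p == null) {
            throw new IllegalArgumentException("No SortFieldProvider named " + name);
        }
        return p;
    }
}
```

Because only the provider name is serialized, the index format stays decoupled 
from concrete class names, which is what makes this module-system friendly.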

> Make SortField responsible for index sorting
> 
>
> Key: LUCENE-9330
> URL: https://issues.apache.org/jira/browse/LUCENE-9330
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Priority: Major
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Index sorting is currently handled inside Sorter and MultiSorter, with 
> hard-coded implementations dependent on SortField types.  This means that you 
> can't sort by custom SortFields, and also that the logic for handling 
> specific sort types is split between several unrelated classes.
> SortFields should instead be able to implement their own index sorting 
> methods.






[jira] [Commented] (LUCENE-7788) fail precommit on unparameterised log messages and examine for wasted work/objects

2020-04-20 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087626#comment-17087626
 ] 

Erick Erickson commented on LUCENE-7788:


BTW, recent versions of this have a hard-coded list of all directories I've 
cleaned. You still have to specify "-PsrcDir=blah", which gets added to the 
list and all are checked. This check could be added as-is to precommit, but as 
long as I'm making rapid progress I don't see the need.



> fail precommit on unparameterised log messages and examine for wasted 
> work/objects
> --
>
> Key: LUCENE-7788
> URL: https://issues.apache.org/jira/browse/LUCENE-7788
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-7788.patch, LUCENE-7788.patch, gradle_only.patch, 
> gradle_only.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> SOLR-10415 would be removing existing unparameterised log.trace messages use 
> and once that is in place then this ticket's one-line change would be for 
> 'ant precommit' to reject any future unparameterised log.trace message use.






[jira] [Commented] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087624#comment-17087624
 ] 

Colvin Cowie commented on SOLR-14416:
-

I can do a patch for the SystemInfoHandler when I get a free minute.

> Nodes view doesn't work correctly when Solr is hosted on Windows
> 
>
> Key: SOLR-14416
> URL: https://issues.apache.org/jira/browse/SOLR-14416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.7.2, 8.1, 8.2, 8.1.1, 8.3, 8.3.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> I sent a message about this on the mailing list a long time ago and got no 
> replies.
> Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
> fixed in 8.5, but I will check.
> On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
> Windows 10
> In the Nodes view of the Admin 
> UI,http://localhost:8983/solr/#/~cloud?view=nodes there is a refresh button. 
> However when you click it, the only thing that gets visibly refreshed is the 
> 'bar chart' (not sure what to call it - it's shown when you choose show 
> details) of the index shard size on disk. The other stats do not update.
> Also, when there is more than one node, only some of the node information is 
> shown
>  !screenshot-1.png! 
> Firefox dev console shows:
> {noformat}
> _Error: s.system.uptime is undefined
> nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
> v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
> $eval@http://localhost:8983/solr/libs/angular.js:14406:16
> $digest@http://localhost:8983/solr/libs/angular.js:14222:15
> $apply@http://localhost:8983/solr/libs/angular.js:14511:13
> done@http://localhost:8983/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
> {noformat}
> The system response has upTimeMs in it for the JVM/JMX properties, but no 
> system/uptime
> {noformat}
> {
>   "responseHeader":{
> "status":0,
> "QTime":63},
>   "localhost:8983_solr":{
> "responseHeader":{
>   "status":0,
>   "QTime":49},
> "mode":"solrcloud",
> "zkHost":"localhost:9983",
> "solr_home":"...",
> "lucene":{
>   "solr-spec-version":"8.1.1",
>   "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:20:01",
>   "lucene-spec-version":"8.1.1",
>   "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:15:24"},
> "jvm":{
>   "version":"1.8.0_211 25.211-b12",
>   "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
>   "spec":{
> "vendor":"Oracle Corporation",
> "name":"Java Platform API Specification",
> "version":"1.8"},
>   "jre":{
> "vendor":"Oracle Corporation",
> "version":"1.8.0_211"},
>   "vm":{
> "vendor":"Oracle Corporation",
> "name":"Java HotSpot(TM) 64-Bit Server VM",
> "version":"25.211-b12"},
>   "processors":8,
>   "memory":{
> "free":"1.4 GB",
> "total":"2 GB",
> "max":"2 GB",
> "used":"566.7 MB (%27.7)",
> "raw":{
>   "free":1553268432,
>   "total":2147483648,
>   "max":2147483648,
>   "used":594215216,
>   "used%":27.670302242040634}},
>   "jmx":{
> "bootclasspath":"...",
> "classpath":"start.jar",
> "commandLineArgs":[...],
> "startTime":"2019-06-20T11:41:58.955Z",
> "upTimeMS":516602}},
> "system":{
>   "name":"Windows 10",
>   "arch":"amd64",
>   "availableProcessors":8,
>   "systemLoadAverage":-1.0,
>   "version":"10.0",
>   "committedVirtualMemorySize":2709114880,
>   "freePhysicalMemorySize":16710127616,
>   "freeSwapSpaceSize":16422531072,
>   "processCpuLoad":0.13941671744473663,
>   "processCpuTime":194609375000,
>   "systemCpuLoad":0.25816002967796037,
>   "totalPhysicalMemorySize":34261250048,
>   "totalSwapSpaceSize":39361523712},
> "node":"localhost:8983_solr"}}
> {noformat}
> The SystemInfoHandler does this:
> {code}
> // Try some command line things:
> try {
>   if (!Constants.WINDOWS) {
> info.add( "uname",  execute( "uname -a" ) );
> info.add( "uptime", execute( "uptime" ) );
>   }
> } catch( Exception ex ) {
>   log.warn("Unable to execute 

[GitHub] [lucene-solr] uschindler commented on issue #1440: LUCENE-9330: Make SortFields responsible for index sorting and serialization

2020-04-20 Thread GitBox


uschindler commented on issue #1440:
URL: https://github.com/apache/lucene-solr/pull/1440#issuecomment-616486419


   I left a comment on the issue about the serialization of class names: don't 
do this; it will break with Java's module system. We should use named SPI for 
that, like PostingsFormat or codecs.






[jira] [Commented] (LUCENE-9330) Make SortField responsible for index sorting

2020-04-20 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087613#comment-17087613
 ] 

Uwe Schindler commented on LUCENE-9330:
---

Hi,

I checked your PR. The first thing I really did not like was that you serialize 
the class name of the SortField into the index. IMHO we should use SPI for 
that, in the same way as postings formats.

My idea would be to make SortField abstract, let it implement NamedSPI (like 
codecs), and then add SortField.forName(). That way the index format stays 
independent of concrete class names.

This also helps with Java's module system (see the other issue): once modules 
are involved, you can't easily load classes from different modules. That only 
works with SPI; your custom SortField implementation just needs to be exported 
as a service provider by your code.

Maybe we should not misuse SortField for all of this, and instead add a new 
class that is just another codec component, like PostingsFormat, responsible 
for sorting. That component could also return a SortField instance to be used 
for search.

> Make SortField responsible for index sorting
> 
>
> Key: LUCENE-9330
> URL: https://issues.apache.org/jira/browse/LUCENE-9330
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Index sorting is currently handled inside Sorter and MultiSorter, with 
> hard-coded implementations dependent on SortField types.  This means that you 
> can't sort by custom SortFields, and also that the logic for handling 
> specific sort types is split between several unrelated classes.
> SortFields should instead be able to implement their own index sorting 
> methods.






[jira] [Commented] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17087611#comment-17087611
 ] 

Jan Høydahl commented on SOLR-14416:


Definitely looks like a bug. We should check for 'undefined' for uptime, since 
it will never be filled on Windows systems. Or the 'SystemInfoHandler' could 
fill the two vars with uname=Windows and uptime="N/A" so you don't need to 
touch the JS at all.
Do you feel comfortable making a PR / patch?
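
The second option could look roughly like this (a hedged sketch: a plain Map 
stands in for the handler's response structure, and `runCommand` is a 
placeholder for the handler's command execution, not the real method):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the fallback: always populate "uname" and "uptime", so the
// Admin UI's s.system.uptime is never undefined on Windows.
public class SystemInfoFallback {
    static Map<String, String> osInfo(boolean isWindows) {
        Map<String, String> info = new LinkedHashMap<>();
        if (isWindows) {
            // Windows has no uname/uptime commands; fill placeholders.
            info.put("uname", "Windows");
            info.put("uptime", "N/A");
        } else {
            info.put("uname", runCommand("uname -a"));
            info.put("uptime", runCommand("uptime"));
        }
        return info;
    }

    // Hypothetical stand-in for executing a shell command.
    static String runCommand(String cmd) {
        return "(output of `" + cmd + "`)";
    }
}
```

This keeps the fix server-side, so the Angular controller needs no changes.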

> Nodes view doesn't work correctly when Solr is hosted on Windows
> 
>
> Key: SOLR-14416
> URL: https://issues.apache.org/jira/browse/SOLR-14416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.7.2, 8.1, 8.2, 8.1.1, 8.3, 8.3.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: screenshot-1.png
>
>
> I sent a message about this on the mailing list a long time ago and got no 
> replies.
> Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
> fixed in 8.5, but I will check.
> On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
> Windows 10
> In the Nodes view of the Admin 
> UI,http://localhost:8983/solr/#/~cloud?view=nodes there is a refresh button. 
> However when you click it, the only thing that gets visibly refreshed is the 
> 'bar chart' (not sure what to call it - it's shown when you choose show 
> details) of the index shard size on disk. The other stats do not update.
> Also, when there is more than one node, only some of the node information is 
> shown
>  !screenshot-1.png! 
> Firefox dev console shows:
> {noformat}
> _Error: s.system.uptime is undefined
> nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
> v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
> $eval@http://localhost:8983/solr/libs/angular.js:14406:16
> $digest@http://localhost:8983/solr/libs/angular.js:14222:15
> $apply@http://localhost:8983/solr/libs/angular.js:14511:13
> done@http://localhost:8983/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
> {noformat}
> The system response has upTimeMs in it for the JVM/JMX properties, but no 
> system/uptime
> {noformat}
> {
>   "responseHeader":{
> "status":0,
> "QTime":63},
>   "localhost:8983_solr":{
> "responseHeader":{
>   "status":0,
>   "QTime":49},
> "mode":"solrcloud",
> "zkHost":"localhost:9983",
> "solr_home":"...",
> "lucene":{
>   "solr-spec-version":"8.1.1",
>   "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:20:01",
>   "lucene-spec-version":"8.1.1",
>   "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:15:24"},
> "jvm":{
>   "version":"1.8.0_211 25.211-b12",
>   "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
>   "spec":{
> "vendor":"Oracle Corporation",
> "name":"Java Platform API Specification",
> "version":"1.8"},
>   "jre":{
> "vendor":"Oracle Corporation",
> "version":"1.8.0_211"},
>   "vm":{
> "vendor":"Oracle Corporation",
> "name":"Java HotSpot(TM) 64-Bit Server VM",
> "version":"25.211-b12"},
>   "processors":8,
>   "memory":{
> "free":"1.4 GB",
> "total":"2 GB",
> "max":"2 GB",
> "used":"566.7 MB (%27.7)",
> "raw":{
>   "free":1553268432,
>   "total":2147483648,
>   "max":2147483648,
>   "used":594215216,
>   "used%":27.670302242040634}},
>   "jmx":{
> "bootclasspath":"...",
> "classpath":"start.jar",
> "commandLineArgs":[...],
> "startTime":"2019-06-20T11:41:58.955Z",
> "upTimeMS":516602}},
> "system":{
>   "name":"Windows 10",
>   "arch":"amd64",
>   "availableProcessors":8,
>   "systemLoadAverage":-1.0,
>   "version":"10.0",
>   "committedVirtualMemorySize":2709114880,
>   "freePhysicalMemorySize":16710127616,
>   "freeSwapSpaceSize":16422531072,
>   "processCpuLoad":0.13941671744473663,
>   "processCpuTime":194609375000,
>   "systemCpuLoad":0.25816002967796037,
>   "totalPhysicalMemorySize":34261250048,
>   "totalSwapSpaceSize":39361523712},
> "node":"localhost:8983_solr"}}
> {noformat}
> The SystemInfoHandler does this:
> {code}
> // Try some command line things:
> try {
>   if (!Constants.WINDOWS) {
>     info.add( "uname",  execute( "uname -a" ) );
>     info.add( "uptime", execute( "uptime" ) );
>   }
> } catch( Exception ex ) {
>   log.warn("Unable to execute command line tools to get operating system 
> properties.", ex);
> }
> {code}
[GitHub] [lucene-solr] romseygeek commented on issue #1440: LUCENE-9330: Make SortFields responsible for index sorting and serialization

2020-04-20 Thread GitBox


romseygeek commented on issue #1440:
URL: https://github.com/apache/lucene-solr/pull/1440#issuecomment-616480246


   This is quite a big PR, but most of it is around the creation of a new 
Lucene86Codec. The important bits to review are the changes to 
DefaultIndexingChain and the DocValuesWriters, and the implementations of 
IndexSorter for the existing sort fields (int, long, float, double, string, 
sortednumeric, sortedset).



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] romseygeek opened a new pull request #1440: LUCENE-9330: Make SortFields responsible for index sorting and serialization

2020-04-20 Thread GitBox


romseygeek opened a new pull request #1440:
URL: https://github.com/apache/lucene-solr/pull/1440


   This commit adds a new class `IndexSorter`, which handles how a sort should 
be applied to documents in an index:
   * how to serialize/deserialize sort info in the segment header
   * how to sort documents within a segment
   * how to sort documents from merging segments
   
   SortField has a `getIndexSorter()` method, which will return `null` if the 
sort cannot be used to sort an index (e.g. if it uses scores or other 
query-dependent values). This also requires a new Codec, as there is a change 
to the SegmentInfoFormat.
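
As a rough illustration of the design the PR describes, here is a minimal 
sketch under assumed names. These are not the actual Lucene interfaces: 
`DocComparator`, `IndexSorter.getDocComparator`, and `intSorter` are invented 
for the example; only the idea (each sort field supplies its own object that 
knows how to order documents within a segment) comes from the description 
above.

```java
// Hypothetical sketch of a per-field index sorter. A real implementation
// would also cover serialization and cross-segment merge ordering.
public class IndexSorterSketch {

    /** Compares two docIDs within one segment. */
    interface DocComparator {
        int compare(int docID1, int docID2);
    }

    /** Per-field sorter: given per-doc values, produce a doc comparator. */
    interface IndexSorter {
        DocComparator getDocComparator(int[] values);
    }

    /** A minimal "sort by int field" sorter, the simplest of the listed types. */
    static IndexSorter intSorter() {
        return values -> (d1, d2) -> Integer.compare(values[d1], values[d2]);
    }

    public static void main(String[] args) {
        int[] fieldValues = {30, 10, 20}; // value stored for docs 0, 1, 2
        DocComparator cmp = intSorter().getDocComparator(fieldValues);
        // doc 1 (value 10) sorts before doc 0 (value 30)
        System.out.println(cmp.compare(1, 0) < 0);
    }
}
```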









[jira] [Updated] (LUCENE-9330) Make SortField responsible for index sorting

2020-04-20 Thread Alan Woodward (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-9330:
--
Parent: LUCENE-9326
Issue Type: Sub-task  (was: Improvement)

> Make SortField responsible for index sorting
> 
>
> Key: LUCENE-9330
> URL: https://issues.apache.org/jira/browse/LUCENE-9330
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Alan Woodward
>Priority: Major
>
> Index sorting is currently handled inside Sorter and MultiSorter, with 
> hard-coded implementations dependent on SortField types.  This means that you 
> can't sort by custom SortFields, and also that the logic for handling 
> specific sort types is split between several unrelated classes.
> SortFields should instead be able to implement their own index sorting 
> methods.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[jira] [Created] (LUCENE-9330) Make SortField responsible for index sorting

2020-04-20 Thread Alan Woodward (Jira)
Alan Woodward created LUCENE-9330:
-

 Summary: Make SortField responsible for index sorting
 Key: LUCENE-9330
 URL: https://issues.apache.org/jira/browse/LUCENE-9330
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Alan Woodward


Index sorting is currently handled inside Sorter and MultiSorter, with 
hard-coded implementations dependent on SortField types.  This means that you 
can't sort by custom SortFields, and also that the logic for handling specific 
sort types is split between several unrelated classes.

SortFields should instead be able to implement their own index sorting methods.






[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Attachment: screenshot-1.png

[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Description: 
I sent a message about this on the mailing list a long time ago and got no 
replies.
Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
fixed in 8.5, but I will check.


On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
Windows 10

In the Nodes view of the Admin UI, 
http://localhost:8983/solr/#/~cloud?view=nodes, there is a refresh button. 
However, when you click it, the only thing that gets visibly refreshed is the 
'bar chart' (not sure what to call it - it's shown when you choose show 
details) of the index shard size on disk. The other stats do not update.
Also, when there is more than one node, only some of the node information is 
shown.
 !screenshot-1.png! 

Firefox dev console shows:


{noformat}
_Error: s.system.uptime is undefined
nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
$eval@http://localhost:8983/solr/libs/angular.js:14406:16
$digest@http://localhost:8983/solr/libs/angular.js:14222:15
$apply@http://localhost:8983/solr/libs/angular.js:14511:13
done@http://localhost:8983/solr/libs/angular.js:9669:36
completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
{noformat}


The system response has upTimeMs in it for the JVM/JMX properties, but no 
system/uptime


{noformat}
{
  "responseHeader":{
"status":0,
"QTime":63},
  "localhost:8983_solr":{
"responseHeader":{
  "status":0,
  "QTime":49},
"mode":"solrcloud",
"zkHost":"localhost:9983",
"solr_home":"...",
"lucene":{
  "solr-spec-version":"8.1.1",
  "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - ab 
- 2019-05-22 15:20:01",
  "lucene-spec-version":"8.1.1",
  "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
ab - 2019-05-22 15:15:24"},
"jvm":{
  "version":"1.8.0_211 25.211-b12",
  "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
  "spec":{
"vendor":"Oracle Corporation",
"name":"Java Platform API Specification",
"version":"1.8"},
  "jre":{
"vendor":"Oracle Corporation",
"version":"1.8.0_211"},
  "vm":{
"vendor":"Oracle Corporation",
"name":"Java HotSpot(TM) 64-Bit Server VM",
"version":"25.211-b12"},
  "processors":8,
  "memory":{
"free":"1.4 GB",
"total":"2 GB",
"max":"2 GB",
"used":"566.7 MB (%27.7)",
"raw":{
  "free":1553268432,
  "total":2147483648,
  "max":2147483648,
  "used":594215216,
  "used%":27.670302242040634}},
  "jmx":{
"bootclasspath":"...",
"classpath":"start.jar",
"commandLineArgs":[...],
"startTime":"2019-06-20T11:41:58.955Z",
"upTimeMS":516602}},
"system":{
  "name":"Windows 10",
  "arch":"amd64",
  "availableProcessors":8,
  "systemLoadAverage":-1.0,
  "version":"10.0",
  "committedVirtualMemorySize":2709114880,
  "freePhysicalMemorySize":16710127616,
  "freeSwapSpaceSize":16422531072,
  "processCpuLoad":0.13941671744473663,
  "processCpuTime":194609375000,
  "systemCpuLoad":0.25816002967796037,
  "totalPhysicalMemorySize":34261250048,
  "totalSwapSpaceSize":39361523712},
"node":"localhost:8983_solr"}}
{noformat}


The SystemInfoHandler does this:
{code}
// Try some command line things:
try {
  if (!Constants.WINDOWS) {
info.add( "uname",  execute( "uname -a" ) );
info.add( "uptime", execute( "uptime" ) );
  }
} catch( Exception ex ) {
  log.warn("Unable to execute command line tools to get operating system 
properties.", ex);
}
{code}

Which appears to be the problem, since it won't return uname and uptime on 
Windows, but the UI expects them.

If I run uptime from my Ubuntu shell in WSL the output is like "16:41:40 up 7 
min,  0 users,  load average: 0.52, 0.58, 0.59". If I make the System handler 
return that then there are no further dev console errors...
However, even with that "fixed", refresh doesn't actually seem to refresh 
anything other than the graph.

In contrast, refreshing the System (e.g. memory) section on the main dashboard 
does correctly update.

The missing "uptime" from the response looks like the problem, but it isn't 
actually what stops refresh from updating, since refresh still does nothing 
when I do return an uptime. So, is the Nodes view supposed to be refreshing 
everything, or are my expectations wrong?

  was:
I sent a 

[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Attachment: (was: image-2020-04-20-10-37-22-308.png)

[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Affects Version/s: 7.7.2
   8.1
   8.2
   8.1.1
   8.3
   8.3.1

[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Description: 
I sent a message about this on the mailing list a long time ago and got no 
replies.
Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
fixed in 8.5, but I will check.


On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
Windows 10

In the Nodes view of the Admin 
UI,http://localhost:8983/solr/#/~cloud?view=nodes there is a refresh button. 
However when you click it, the only thing that gets visibly refreshed is the 
'bar chart' (not sure what to call it - it's shown when you choose show 
details) of the index shard size on disk. The other stats do not update.
Also, when there is more than one node, only some of the node information is 
shown
 !image-2020-04-20-10-37-22-308.png!  

Firefox dev console shows:

_Error: s.system.uptime is undefined
nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
$eval@http://localhost:8983/solr/libs/angular.js:14406:16
$digest@http://localhost:8983/solr/libs/angular.js:14222:15
$apply@http://localhost:8983/solr/libs/angular.js:14511:13
done@http://localhost:8983/solr/libs/angular.js:9669:36
completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_

The system response has upTimeMs in it for the JVM/JMX properties, but no 
system/uptime

{noformat}
{
  "responseHeader":{
    "status":0,
    "QTime":63},
  "localhost:8983_solr":{
    "responseHeader":{
      "status":0,
      "QTime":49},
    "mode":"solrcloud",
    "zkHost":"localhost:9983",
    "solr_home":"...",
    "lucene":{
      "solr-spec-version":"8.1.1",
      "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - ab - 2019-05-22 15:20:01",
      "lucene-spec-version":"8.1.1",
      "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - ab - 2019-05-22 15:15:24"},
    "jvm":{
      "version":"1.8.0_211 25.211-b12",
      "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
      "spec":{
        "vendor":"Oracle Corporation",
        "name":"Java Platform API Specification",
        "version":"1.8"},
      "jre":{
        "vendor":"Oracle Corporation",
        "version":"1.8.0_211"},
      "vm":{
        "vendor":"Oracle Corporation",
        "name":"Java HotSpot(TM) 64-Bit Server VM",
        "version":"25.211-b12"},
      "processors":8,
      "memory":{
        "free":"1.4 GB",
        "total":"2 GB",
        "max":"2 GB",
        "used":"566.7 MB (%27.7)",
        "raw":{
          "free":1553268432,
          "total":2147483648,
          "max":2147483648,
          "used":594215216,
          "used%":27.670302242040634}},
      "jmx":{
        "bootclasspath":"...",
        "classpath":"start.jar",
        "commandLineArgs":[...],
        "startTime":"2019-06-20T11:41:58.955Z",
        "upTimeMS":516602}},
    "system":{
      "name":"Windows 10",
      "arch":"amd64",
      "availableProcessors":8,
      "systemLoadAverage":-1.0,
      "version":"10.0",
      "committedVirtualMemorySize":2709114880,
      "freePhysicalMemorySize":16710127616,
      "freeSwapSpaceSize":16422531072,
      "processCpuLoad":0.13941671744473663,
      "processCpuTime":194609375000,
      "systemCpuLoad":0.25816002967796037,
      "totalPhysicalMemorySize":34261250048,
      "totalSwapSpaceSize":39361523712},
    "node":"localhost:8983_solr"}}
{noformat}

The SystemInfoHandler does this:
{code}
// Try some command line things:
try {
  if (!Constants.WINDOWS) {
    info.add( "uname",  execute( "uname -a" ) );
    info.add( "uptime", execute( "uptime" ) );
  }
} catch( Exception ex ) {
  log.warn("Unable to execute command line tools to get operating system properties.", ex);
}
{code}
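One platform-independent alternative (a sketch only, not what SystemInfoHandler 
actually does): the JVM already exposes its own uptime through the standard 
java.lang.management API, so on Windows the handler could fall back to that 
instead of shelling out to uptime(1). Note this is JVM uptime, not OS uptime, 
and the class name below is hypothetical:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

public class JvmUptime {
    // Returns milliseconds since the JVM started. Works on Windows too,
    // unlike executing the uptime(1) shell command, which the handler
    // skips when Constants.WINDOWS is true.
    static long jvmUptimeMillis() {
        RuntimeMXBean runtime = ManagementFactory.getRuntimeMXBean();
        return runtime.getUptime();
    }

    public static void main(String[] args) {
        System.out.println("JVM uptime (ms): " + jvmUptimeMillis());
    }
}
```

This is the same source as the upTimeMS value that already appears in the 
jvm/jmx section of the response, so it would at least give the UI a non-missing 
value on every platform.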

This appears to be the problem: the handler won't return uname and uptime on 
Windows, but the UI expects them.

If I run uptime from my Ubuntu shell in WSL the output is like "16:41:40 up 7 
min,  0 users,  load average: 0.52, 0.58, 0.59". If I make the System handler 
return that then there are no further dev console errors...
However, even with that "fixed", refresh doesn't actually seem to refresh 
anything other than the graph.

In contrast, refreshing the System (e.g. memory) section on the main dashboard 
does correctly update.

The missing "uptime" from the response looks like the problem, but isn't 
actually stopping refresh from doing anything. So, is the Nodes view supposed 
to be refreshing everything, or are my expectations wrong?

  was:
I sent a message about this on the mailing list a long time ago 

[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Description: 
I sent a message about this on the mailing list a long time ago and got no 
replies.
Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
fixed in 8.5, but I will check.


On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
Windows 10

In the Nodes view of the Admin 
UI,http://localhost:8983/solr/#/~cloud?view=nodes there is a refresh button. 
However when you click it, the only thing that gets visibly refreshed is the 
'bar chart' (not sure what to call it - it's shown when you choose show 
details) of the index shard size on disk. The other stats do not update.
Also, when there is more than one node, only some of the node information is 
shown
 !image-2020-04-20-10-37-22-308.png!  

Firefox dev console shows:


{noformat}
_Error: s.system.uptime is undefined
nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
$eval@http://localhost:8983/solr/libs/angular.js:14406:16
$digest@http://localhost:8983/solr/libs/angular.js:14222:15
$apply@http://localhost:8983/solr/libs/angular.js:14511:13
done@http://localhost:8983/solr/libs/angular.js:9669:36
completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
{noformat}


The system response has upTimeMS in it for the JVM/JMX properties, but no 
system/uptime:


{noformat}
{
  "responseHeader":{
    "status":0,
    "QTime":63},
  "localhost:8983_solr":{
    "responseHeader":{
      "status":0,
      "QTime":49},
    "mode":"solrcloud",
    "zkHost":"localhost:9983",
    "solr_home":"...",
    "lucene":{
      "solr-spec-version":"8.1.1",
      "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - ab - 2019-05-22 15:20:01",
      "lucene-spec-version":"8.1.1",
      "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - ab - 2019-05-22 15:15:24"},
    "jvm":{
      "version":"1.8.0_211 25.211-b12",
      "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
      "spec":{
        "vendor":"Oracle Corporation",
        "name":"Java Platform API Specification",
        "version":"1.8"},
      "jre":{
        "vendor":"Oracle Corporation",
        "version":"1.8.0_211"},
      "vm":{
        "vendor":"Oracle Corporation",
        "name":"Java HotSpot(TM) 64-Bit Server VM",
        "version":"25.211-b12"},
      "processors":8,
      "memory":{
        "free":"1.4 GB",
        "total":"2 GB",
        "max":"2 GB",
        "used":"566.7 MB (%27.7)",
        "raw":{
          "free":1553268432,
          "total":2147483648,
          "max":2147483648,
          "used":594215216,
          "used%":27.670302242040634}},
      "jmx":{
        "bootclasspath":"...",
        "classpath":"start.jar",
        "commandLineArgs":[...],
        "startTime":"2019-06-20T11:41:58.955Z",
        "upTimeMS":516602}},
    "system":{
      "name":"Windows 10",
      "arch":"amd64",
      "availableProcessors":8,
      "systemLoadAverage":-1.0,
      "version":"10.0",
      "committedVirtualMemorySize":2709114880,
      "freePhysicalMemorySize":16710127616,
      "freeSwapSpaceSize":16422531072,
      "processCpuLoad":0.13941671744473663,
      "processCpuTime":194609375000,
      "systemCpuLoad":0.25816002967796037,
      "totalPhysicalMemorySize":34261250048,
      "totalSwapSpaceSize":39361523712},
    "node":"localhost:8983_solr"}}
{noformat}


The SystemInfoHandler does this:
{code}
// Try some command line things:
try {
  if (!Constants.WINDOWS) {
    info.add( "uname",  execute( "uname -a" ) );
    info.add( "uptime", execute( "uptime" ) );
  }
} catch( Exception ex ) {
  log.warn("Unable to execute command line tools to get operating system properties.", ex);
}
{code}

This appears to be the problem: the handler won't return uname and uptime on 
Windows, but the UI expects them.

If I run uptime from my Ubuntu shell in WSL the output is like "16:41:40 up 7 
min,  0 users,  load average: 0.52, 0.58, 0.59". If I make the System handler 
return that then there are no further dev console errors...
However, even with that "fixed", refresh doesn't actually seem to refresh 
anything other than the graph.
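If the handler did synthesize a system/uptime value on Windows, it would 
presumably need to match the shape the UI parses out of the uptime(1) line 
quoted above. A minimal sketch, assuming the controller only cares about the 
"up N min" / "up H:MM" portion (that assumption has not been verified against 
cloud.js, and the class name is hypothetical):

```java
import java.time.Duration;

public class UptimeString {
    // Hypothetical formatter: renders a duration in the uptime(1) style,
    // e.g. "up 7 min" or "up 1:05". The exact format the Admin UI expects
    // is an assumption, not a documented contract.
    static String format(Duration d) {
        long hours = d.toHours();
        long minutes = d.toMinutes() % 60;
        return hours == 0
            ? "up " + minutes + " min"
            : String.format("up %d:%02d", hours, minutes);
    }

    public static void main(String[] args) {
        System.out.println(format(Duration.ofMinutes(7)));   // up 7 min
        System.out.println(format(Duration.ofMinutes(65)));  // up 1:05
    }
}
```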

In contrast, refreshing the System (e.g. memory) section on the main dashboard 
does correctly update.

The missing "uptime" from the response looks like the problem, but isn't 
actually stopping refresh from doing anything. So, is the Nodes view supposed 
to be refreshing everything, or are my expectations wrong?

  was:
I sent a message 

[jira] [Created] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)
Colvin Cowie created SOLR-14416:
---

 Summary: Nodes view doesn't work correctly when Solr is hosted on 
Windows
 Key: SOLR-14416
 URL: https://issues.apache.org/jira/browse/SOLR-14416
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Colvin Cowie
 Attachments: image-2020-04-20-10-37-22-308.png

I sent a message about this on the mailing list a long time ago and got no 
replies.
Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
fixed in 8.5, but I will check.


On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
Windows 10

In the Nodes view of the Admin 
UI,http://localhost:8983/solr/#/~cloud?view=nodes there is a refresh button. 
However when you click it, the only thing that gets visibly refreshed is the 
'bar chart' (not sure what to call it - it's shown when you choose show 
details) of the index shard size on disk. The other stats do not update.
Also, when there is more than one node, only some of the node information is 
shown
 !image-2020-04-20-10-37-22-308.png!  

Firefox dev console shows:

_Error: s.system.uptime is undefined
nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
$eval@http://localhost:8983/solr/libs/angular.js:14406:16
$digest@http://localhost:8983/solr/libs/angular.js:14222:15
$apply@http://localhost:8983/solr/libs/angular.js:14511:13
done@http://localhost:8983/solr/libs/angular.js:9669:36
completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_

The system response has upTimeMS in it for the JVM/JMX properties, but no 
system/uptime:

{noformat}
{
  "responseHeader":{
    "status":0,
    "QTime":63},
  "localhost:8983_solr":{
    "responseHeader":{
      "status":0,
      "QTime":49},
    "mode":"solrcloud",
    "zkHost":"localhost:9983",
    "solr_home":"...",
    "lucene":{
      "solr-spec-version":"8.1.1",
      "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - ab - 2019-05-22 15:20:01",
      "lucene-spec-version":"8.1.1",
      "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - ab - 2019-05-22 15:15:24"},
    "jvm":{
      "version":"1.8.0_211 25.211-b12",
      "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
      "spec":{
        "vendor":"Oracle Corporation",
        "name":"Java Platform API Specification",
        "version":"1.8"},
      "jre":{
        "vendor":"Oracle Corporation",
        "version":"1.8.0_211"},
      "vm":{
        "vendor":"Oracle Corporation",
        "name":"Java HotSpot(TM) 64-Bit Server VM",
        "version":"25.211-b12"},
      "processors":8,
      "memory":{
        "free":"1.4 GB",
        "total":"2 GB",
        "max":"2 GB",
        "used":"566.7 MB (%27.7)",
        "raw":{
          "free":1553268432,
          "total":2147483648,
          "max":2147483648,
          "used":594215216,
          "used%":27.670302242040634}},
      "jmx":{
        "bootclasspath":"...",
        "classpath":"start.jar",
        "commandLineArgs":[...],
        "startTime":"2019-06-20T11:41:58.955Z",
        "upTimeMS":516602}},
    "system":{
      "name":"Windows 10",
      "arch":"amd64",
      "availableProcessors":8,
      "systemLoadAverage":-1.0,
      "version":"10.0",
      "committedVirtualMemorySize":2709114880,
      "freePhysicalMemorySize":16710127616,
      "freeSwapSpaceSize":16422531072,
      "processCpuLoad":0.13941671744473663,
      "processCpuTime":194609375000,
      "systemCpuLoad":0.25816002967796037,
      "totalPhysicalMemorySize":34261250048,
      "totalSwapSpaceSize":39361523712},
    "node":"localhost:8983_solr"}}
{noformat}

The SystemInfoHandler does this:
{code}
// Try some command line things:
try {
  if (!Constants.WINDOWS) {
    info.add( "uname",  execute( "uname -a" ) );
    info.add( "uptime", execute( "uptime" ) );
  }
} catch( Exception ex ) {
  log.warn("Unable to execute command line tools to get operating system properties.", ex);
}
{code}

This appears to be the problem.

If I run uptime from my Ubuntu shell in WSL the output is like "16:41:40 up 7 
min,  0 users,  load average: 0.52, 0.58, 0.59". If I make the System handler 
return that then there are no further dev console errors...
However, even with that "fixed", refresh doesn't actually seem to refresh 
anything other than the graph.

In contrast, refreshing the System (e.g. memory) section on the main dashboard 
does correctly update.

The missing "uptime" from the response looks like the problem, but isn't 
actually stopping refresh from 

[jira] [Updated] (SOLR-14416) Nodes view doesn't work correctly when Solr is hosted on Windows

2020-04-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14416?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14416:

Component/s: Admin UI

> Nodes view doesn't work correctly when Solr is hosted on Windows
> 
>
> Key: SOLR-14416
> URL: https://issues.apache.org/jira/browse/SOLR-14416
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: image-2020-04-20-10-37-22-308.png
>
>
> I sent a message about this on the mailing list a long time ago and got no 
> replies.
> Originally I saw it on 8.1.1, it's a problem in 8.3.1 and I don't expect it's 
> fixed in 8.5, but I will check.
> On Solr 8.1.1 / 7.7.2 with Oracle 1.8.0_191 25.191-b12 with Solr running on 
> Windows 10
> In the Nodes view of the Admin 
> UI,http://localhost:8983/solr/#/~cloud?view=nodes there is a refresh button. 
> However when you click it, the only thing that gets visibly refreshed is the 
> 'bar chart' (not sure what to call it - it's shown when you choose show 
> details) of the index shard size on disk. The other stats do not update.
> Also, when there is more than one node, only some of the node information is 
> shown
>  !image-2020-04-20-10-37-22-308.png!  
> Firefox dev console shows:
> _Error: s.system.uptime is undefined
> nodesSubController/$scope.reload/<@http://localhost:8983/solr/js/angular/controllers/cloud.js:384:11
> v/http://localhost:8983/solr/libs/angular-resource.min.js:33:133
> processQueue@http://localhost:8983/solr/libs/angular.js:13193:27
> scheduleProcessQueue/<@http://localhost:8983/solr/libs/angular.js:13209:27
> $eval@http://localhost:8983/solr/libs/angular.js:14406:16
> $digest@http://localhost:8983/solr/libs/angular.js:14222:15
> $apply@http://localhost:8983/solr/libs/angular.js:14511:13
> done@http://localhost:8983/solr/libs/angular.js:9669:36
> completeRequest@http://localhost:8983/solr/libs/angular.js:9859:7
> requestLoaded@http://localhost:8983/solr/libs/angular.js:9800:9_
> The system response has upTimeMs in it for the JVM/JMX properties, but no 
> system/uptime
> {
>   "responseHeader":{
> "status":0,
> "QTime":63},
>   "localhost:8983_solr":{
> "responseHeader":{
>   "status":0,
>   "QTime":49},
> "mode":"solrcloud",
> "zkHost":"localhost:9983",
> "solr_home":"...",
> "lucene":{
>   "solr-spec-version":"8.1.1",
>   "solr-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:20:01",
>   "lucene-spec-version":"8.1.1",
>   "lucene-impl-version":"8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef - 
> ab - 2019-05-22 15:15:24"},
> "jvm":{
>   "version":"1.8.0_211 25.211-b12",
>   "name":"Oracle Corporation Java HotSpot(TM) 64-Bit Server VM",
>   "spec":{
> "vendor":"Oracle Corporation",
> "name":"Java Platform API Specification",
> "version":"1.8"},
>   "jre":{
> "vendor":"Oracle Corporation",
> "version":"1.8.0_211"},
>   "vm":{
> "vendor":"Oracle Corporation",
> "name":"Java HotSpot(TM) 64-Bit Server VM",
> "version":"25.211-b12"},
>   "processors":8,
>   "memory":{
> "free":"1.4 GB",
> "total":"2 GB",
> "max":"2 GB",
> "used":"566.7 MB (%27.7)",
> "raw":{
>   "free":1553268432,
>   "total":2147483648,
>   "max":2147483648,
>   "used":594215216,
>   "used%":27.670302242040634}},
>   "jmx":{
> "bootclasspath":"...",
> "classpath":"start.jar",
> "commandLineArgs":[...],
> "startTime":"2019-06-20T11:41:58.955Z",
> "upTimeMS":516602}},
> "system":{
>   "name":"Windows 10",
>   "arch":"amd64",
>   "availableProcessors":8,
>   "systemLoadAverage":-1.0,
>   "version":"10.0",
>   "committedVirtualMemorySize":2709114880,
>   "freePhysicalMemorySize":16710127616,
>   "freeSwapSpaceSize":16422531072,
>   "processCpuLoad":0.13941671744473663,
>   "processCpuTime":194609375000,
>   "systemCpuLoad":0.25816002967796037,
>   "totalPhysicalMemorySize":34261250048,
>   "totalSwapSpaceSize":39361523712},
> "node":"localhost:8983_solr"}}
> The SystemInfoHandler does this:
> // Try some command line things:
> try {
>   if (!Constants.WINDOWS) {
> info.add( "uname",  execute( "uname -a" ) );
> info.add( "uptime", execute( "uptime" ) );
>   }
> } catch( Exception ex ) {
>   log.warn("Unable to execute command line tools to get operating system 
> properties.", ex);
> }
> Which appears to be the problem.
> If I run uptime from my Ubuntu shell in WSL the output is like