[GitHub] [lucene-solr] yonik merged pull request #1369: SOLR-14213: Allow enabling shared store to be scriptable

2020-03-23 Thread GitBox
yonik merged pull request #1369: SOLR-14213: Allow enabling shared store to be 
scriptable
URL: https://github.com/apache/lucene-solr/pull/1369
 
 
   





[jira] [Commented] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-23 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17065101#comment-17065101
 ] 

Lucene/Solr QA commented on SOLR-14345:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 48m 2s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 52s{color} | {color:green} solrj in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 36s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14345 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12997448/SOLR-14345.patch |
| Optional Tests | compile javac unit ratsources checkforbiddenapis validatesourcepatterns |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 68e43044537 |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
| Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/724/testReport/ |
| modules | C: solr/core solr/solrj solr/test-framework U: solr |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/724/console |
| Powered by | Apache Yetus 0.7.0 http://yetus.apache.org |


This message was automatically generated.



> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch, SOLR-14345.patch
>
>
> Default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding some tests which use 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.
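For context, a minimal SolrJ sketch of the scenario the description refers to. This is an illustration with assumptions, not part of the patch: it presumes a local Solr at http://localhost:8983 with a hypothetical collection named techproducts, forces the non-default XMLResponseParser, and sends a deliberately invalid query so the remote error either surfaces on the client or, per this bug, gets lost:

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.impl.XMLResponseParser;
import org.apache.solr.common.SolrException;

public class NonDefaultParserSketch {
  public static void main(String[] args) throws Exception {
    // Override the default javabin (BinaryResponseParser) with the XML parser.
    try (HttpSolrClient client = new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts")
        .withResponseParser(new XMLResponseParser())
        .build()) {
      // Deliberately invalid query parser name to force a remote error.
      client.query(new SolrQuery("{!bogus}oops"));
    } catch (SolrException | SolrServerException e) {
      // With the default parser the remote error text is visible here; the bug above
      // is that with a non-default parser the message may not make it back intact.
      System.out.println("client saw: " + e.getMessage());
    }
  }
}
{code}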






[GitHub] [lucene-solr] mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: Collectors to skip noncompetitive documents

2020-03-23 Thread GitBox
mayya-sharipova commented on a change in pull request #1351: LUCENE-9280: 
Collectors to skip noncompetitive documents
URL: https://github.com/apache/lucene-solr/pull/1351#discussion_r396687134
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/FieldComparator.java
 ##
 @@ -928,4 +928,9 @@ public int compareTop(int doc) throws IOException {
 @Override
 public void setScorer(Scorable scorer) {}
   }
+
+  public static abstract class IteratorSupplierComparator<T> extends FieldComparator<T> implements LeafFieldComparator {
+    abstract DocIdSetIterator iterator();
+    abstract void updateIterator() throws IOException;
 
 Review comment:
   @msokolov  Thanks for the suggestion, naming is tough, addressed in 95e1bc1.





[GitHub] [lucene-solr] epugh commented on issue #1033: SOLR-13965: Use Plugin to add new expressions to GraphHandler

2020-03-23 Thread GitBox
epugh commented on issue #1033: SOLR-13965: Use Plugin to add new expressions 
to GraphHandler
URL: https://github.com/apache/lucene-solr/pull/1033#issuecomment-602789337
 
 
   Okay, after digging around, I think this was committed to Master, so this PR 
can be closed?  Sorry about the noise @cpoerschke and @madrob 





[GitHub] [lucene-solr] epugh commented on issue #1033: SOLR-13965: Use Plugin to add new expressions to GraphHandler

2020-03-23 Thread GitBox
epugh commented on issue #1033: SOLR-13965: Use Plugin to add new expressions 
to GraphHandler
URL: https://github.com/apache/lucene-solr/pull/1033#issuecomment-602787911
 
 
   Okay, @madrob or @cpoerschke how does this look?





[jira] [Commented] (LUCENE-9286) FST construction explodes memory in BitTable

2020-03-23 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064998#comment-17064998
 ] 

Dawid Weiss commented on LUCENE-9286:
-

Only time is relevant here - it's the wall clock time of a given block of 
code's execution. The rest shows percentage of overall "job" time and offset 
since the start of the first task - I used our console reporting utilities 
because it was simpler for me to copy/paste but you can definitely change it 
into something else.

I didn't do any special warmups or anything so the above times should be taken 
with a grain of salt. But they do reflect what I see in our production system 
as well (and you can add a warmup round if you wish).

If you have a spare cycle, given the bizarre circumstances, please go ahead and 
have a look. I'm working on something else at the moment and can't devote much 
time to understanding why this slowdown takes place but I'd be interested in 
any findings!

> FST construction explodes memory in BitTable
> 
>
> Key: LUCENE-9286
> URL: https://issues.apache.org/jira/browse/LUCENE-9286
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Attachments: screen-[1].png
>
>
> I see a dramatic increase in the amount of memory required for construction 
> of (arguably large) automata. It currently OOMs with 8GB of memory consumed 
> for bit tables. I am pretty sure this didn't require so much memory before 
> (the automaton is ~50MB after construction).
> Something bad happened in between. Thoughts, [~broustant], [~sokolov]?






[GitHub] [lucene-solr] HoustonPutman commented on issue #1372: SOLR-14358 Document how to use Processor in JavaDocs

2020-03-23 Thread GitBox
HoustonPutman commented on issue #1372: SOLR-14358 Document how to use 
Processor in JavaDocs
URL: https://github.com/apache/lucene-solr/pull/1372#issuecomment-602768597
 
 
   The [URP refguide 
page](https://lucene.apache.org/solr/guide/8_4/update-request-processors.html#general-use-updateprocessorfactories)
 links to each of the individual processors' Javadocs. So for now I don't see 
an issue with this documentation living in the Javadocs, until the 
documentation for all the processors gets mass-migrated to the ref-guide.





[GitHub] [lucene-solr] andyvuong commented on a change in pull request #1369: SOLR-14213: Allow enabling shared store to be scriptable

2020-03-23 Thread GitBox
andyvuong commented on a change in pull request #1369: SOLR-14213: Allow 
enabling shared store to be scriptable
URL: https://github.com/apache/lucene-solr/pull/1369#discussion_r396649036
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/core/SolrXmlConfig.java
 ##
 @@ -568,4 +568,17 @@ private static PluginInfo getTracerPluginInfo(XmlConfigFile config) {
     Node node = config.getNode("solr/tracerConfig", false);
     return (node == null) ? null : new PluginInfo(node, "tracerConfig", false, true);
   }
+
+  private static SharedStoreConfig loadSharedStoreConfig(NamedList nl) {
 
 Review comment:
   FYI, I moved the actual initialization method from SharedStoreConfig to 
SolrXmlConfig to avoid duplicating the existing field-extraction methods or 
changing their visibility, and so that it aligns with how the rest of the 
section objects are created. I've kept SharedStoreConfig since I anticipate 
we'll start using it once we complete the work on making enablement configurable.





[GitHub] [lucene-solr] janhoy commented on issue #1372: SOLR-14358 Document how to use Processor in JavaDocs

2020-03-23 Thread GitBox
janhoy commented on issue #1372: SOLR-14358 Document how to use Processor in 
JavaDocs
URL: https://github.com/apache/lucene-solr/pull/1372#issuecomment-602762093
 
 
   Great! I just wonder if this doc would also fit in the RefGuide? Users will 
not always care about Javadocs.
   
   In the example, it would be better if the url_xxx fields were defined as 
single-valued and numeric where it makes sense. A typical use case is to use 
these for various query boosts. Perhaps that should be mentioned too?





[GitHub] [lucene-solr] dsmiley commented on a change in pull request #1373: SOLR-14340 - ZkStateReader.readConfigName is doing too much work

2020-03-23 Thread GitBox
dsmiley commented on a change in pull request #1373: SOLR-14340 - 
ZkStateReader.readConfigName is doing too much work
URL: https://github.com/apache/lucene-solr/pull/1373#discussion_r396633704
 
 

 ##
 File path: solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
 ##
 @@ -294,10 +294,6 @@ public String readConfigName(String collection) throws KeeperException {
     log.debug("Loading collection config from: [{}]", path);
 
     try {
-      if (zkClient.exists(path, true) == false) {
-        log.warn("No collection found at path {}.", path);
-        throw new KeeperException.NoNodeException("No collection found at path: " + path);
-      }
       byte[] data = zkClient.getData(path, null, null, true);
 
 Review comment:
   I suggest leaving this because it'll be quite moot once the configSet 
setting moves to state.json.





[GitHub] [lucene-solr] mariemat commented on a change in pull request #1373: SOLR-14340 - ZkStateReader.readConfigName is doing too much work

2020-03-23 Thread GitBox
mariemat commented on a change in pull request #1373: SOLR-14340 - 
ZkStateReader.readConfigName is doing too much work
URL: https://github.com/apache/lucene-solr/pull/1373#discussion_r396631871
 
 

 ##
 File path: solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java
 ##
 @@ -294,10 +294,6 @@ public String readConfigName(String collection) throws KeeperException {
     log.debug("Loading collection config from: [{}]", path);
 
     try {
-      if (zkClient.exists(path, true) == false) {
-        log.warn("No collection found at path {}.", path);
-        throw new KeeperException.NoNodeException("No collection found at path: " + path);
-      }
       byte[] data = zkClient.getData(path, null, null, true);
 
 Review comment:
   As the NoNodeException is still thrown, and the message, even if not as 
precise, is still pretty clear, I left it that way. But if you believe it can 
be useful, I can add that check back with the proper message. 
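   For illustration, a standalone sketch of the idea (not the actual patch; the class and helper names are made up): it leans on SolrZkClient.getData() itself throwing KeeperException.NoNodeException when the znode is absent, which is what makes the removed exists() round trip redundant.

```java
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.solr.common.cloud.ZkNodeProps;
import org.apache.solr.common.cloud.ZkStateReader;
import org.apache.zookeeper.KeeperException;

public final class ReadConfigNameSketch {
  /** Reads the configSet name with a single ZooKeeper round trip. */
  public static String readConfigName(SolrZkClient zkClient, String collection)
      throws KeeperException, InterruptedException {
    String path = ZkStateReader.COLLECTIONS_ZKNODE + "/" + collection;
    // getData() throws KeeperException.NoNodeException on a missing node,
    // so no separate exists() check is needed.
    byte[] data = zkClient.getData(path, null, null, true);
    return ZkNodeProps.load(data).getStr(ZkStateReader.CONFIGNAME_PROP);
  }
}
```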





[jira] [Commented] (SOLR-14347) Autoscaling placement wrong when concurrent replica placements are calculated

2020-03-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064967#comment-17064967
 ] 

ASF subversion and git services commented on SOLR-14347:


Commit 68e430445370dc9467585e5a352f471e24e89111 in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=68e4304 ]

SOLR-14347: Autoscaling placement wrong when concurrent replica placements are 
calculated.


> Autoscaling placement wrong when concurrent replica placements are calculated
> -
>
> Key: SOLR-14347
> URL: https://issues.apache.org/jira/browse/SOLR-14347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.5
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-14347.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on 
> different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * resulting replica placement will be seriously wrong, causing many policy 
> violations
> Running the same scenario but instead creating collections sequentially 
> results in no violations.
> I suspect this is caused by incorrect locking level for all collection 
> operations (as defined in {{CollectionParams.CollectionAction}}) that create 
> new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
> REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
> use the policy engine to create new replica placements, and as a result they 
> change the cluster state. However, currently these operations are locked (in 
> {{OverseerCollectionMessageHandler.lockTask}} ) using 
> {{LockLevel.COLLECTION}}. In practice this means that the lock is held only 
> for the particular collection that is being modified.
> A straightforward fix for this issue is to change the locking level to 
> CLUSTER (and I confirm this fixes the scenario described above). However, 
> this effectively serializes all collection operations listed above, which 
> will result in general slow-down of all collection operations.






[jira] [Commented] (SOLR-14341) Move a collection's configSet name to state.json

2020-03-23 Thread Houston Putman (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064965#comment-17064965
 ] 

Houston Putman commented on SOLR-14341:
---

+1 as well. This would definitely be a cleaner implementation.

> Move a collection's configSet name to state.json
> 
>
> Key: SOLR-14341
> URL: https://issues.apache.org/jira/browse/SOLR-14341
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Major
>
> It's a bit odd that a collection's state.json knows everything about a 
> collection except for perhaps the most important pointer -- the configSet 
> name.  Presently the configSet name is retrieved via 
> {{ZkStateReader.getConfigName(collectionName)}} which looks at the zk path 
> {{/collections/collectionName}} (an intermediate node) interpreted as a 
> trivial JSON object.  Combining the configSet name into state.json is simpler 
> and also more efficient since many calls to grab the configset name _already_ 
> need the state.json (via a DocCollection object).






[jira] [Commented] (LUCENE-9286) FST construction explodes memory in BitTable

2020-03-23 Thread Bruno Roustant (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064948#comment-17064948
 ] 

Bruno Roustant commented on LUCENE-9286:


Thanks [~dweiss] for this investigation. I'll look at that.

What does "[Time]", "[%]" and "[+T0]" above mean?

> FST construction explodes memory in BitTable
> 
>
> Key: LUCENE-9286
> URL: https://issues.apache.org/jira/browse/LUCENE-9286
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Attachments: screen-[1].png
>
>
> I see a dramatic increase in the amount of memory required for construction 
> of (arguably large) automata. It currently OOMs with 8GB of memory consumed 
> for bit tables. I am pretty sure this didn't require so much memory before 
> (the automaton is ~50MB after construction).
> Something bad happened in between. Thoughts, [~broustant], [~sokolov]?






[jira] [Comment Edited] (SOLR-14347) Autoscaling placement wrong when concurrent replica placements are calculated

2020-03-23 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064940#comment-17064940
 ] 

Andrzej Bialecki edited comment on SOLR-14347 at 3/23/20, 4:51 PM:
---

It turns out that the bug was caused by the fact that per-collection policies 
are applied during calculations to the cached {{Session}} instance and cause 
side-effects that later affect calculations for other collections.

Setting the LockLevel.CLUSTER fixed this because all computations became 
sequential, but at a relatively high cost of blocking all other CLUSTER level 
operations. It appears that re-creating a {{Policy.Session}} in 
{{PolicyHelper.getReplicaLocations(...)}} fixes this behavior too, because the 
new Session doesn't carry over the side-effects from previous per-collection 
policies. There is a slight performance impact of this approach, because 
re-creating a Session is costly for large clusters, but it's less intrusive 
than locking out all other CLUSTER level ops.

We may re-visit this issue at some point to reduce this cost, but I think this 
fix at least protects us from the current completely wrong behavior.


was (Author: ab):
It turns out that the bug was caused by the fact that per-collection policies 
are applied during calculations and cause side-effects that later affect 
calculations for other collections.

Setting the LockLevel.CLUSTER fixed this because all computations became 
sequential, but at a relatively high cost of blocking all other CLUSTER level 
operations. It appears that re-creating a {{Policy.Session}} in 
{{PolicyHelper.getReplicaLocations(...)}} fixes this behavior too, because the 
new Session doesn't carry over the side-effects from previous per-collection 
policies. There is a slight performance impact of this approach, because 
re-creating a Session is costly for large clusters, but it's less intrusive 
than locking out all other CLUSTER level ops.

We may re-visit this issue at some point to reduce this cost, but I think this 
fix at least protects us from the current completely wrong behavior.

> Autoscaling placement wrong when concurrent replica placements are calculated
> -
>
> Key: SOLR-14347
> URL: https://issues.apache.org/jira/browse/SOLR-14347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.5
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-14347.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on 
> different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * resulting replica placement will be seriously wrong, causing many policy 
> violations
> Running the same scenario but instead creating collections sequentially 
> results in no violations.
> I suspect this is caused by incorrect locking level for all collection 
> operations (as defined in {{CollectionParams.CollectionAction}}) that create 
> new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
> REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
> use the policy engine to create new replica placements, and as a result they 
> change the cluster state. However, currently these operations are locked (in 
> {{OverseerCollectionMessageHandler.lockTask}} ) using 
> {{LockLevel.COLLECTION}}. In practice this means that the lock is held only 
> for the particular collection that is being modified.
> A straightforward fix for this issue is to change the locking level to 
> CLUSTER (and I confirm this fixes the scenario described above). However, 
> this effectively serializes all collection operations listed above, which 
> will result in general slow-down of all collection operations.






[jira] [Commented] (SOLR-14347) Autoscaling placement wrong when concurrent replica placements are calculated

2020-03-23 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064940#comment-17064940
 ] 

Andrzej Bialecki commented on SOLR-14347:
-

It turns out that the bug was caused by the fact that per-collection policies 
are applied during calculations and cause side-effects that later affect 
calculations for other collections.

Setting the LockLevel.CLUSTER fixed this because all computations became 
sequential, but at a relatively high cost of blocking all other CLUSTER level 
operations. It appears that re-creating a {{Policy.Session}} in 
{{PolicyHelper.getReplicaLocations(...)}} fixes this behavior too, because the 
new Session doesn't carry over the side-effects from previous per-collection 
policies. There is a slight performance impact of this approach, because 
re-creating a Session is costly for large clusters, but it's less intrusive 
than locking out all other CLUSTER level ops.

We may re-visit this issue at some point to reduce this cost, but I think this 
fix at least protects us from the current completely wrong behavior.

> Autoscaling placement wrong when concurrent replica placements are calculated
> -
>
> Key: SOLR-14347
> URL: https://issues.apache.org/jira/browse/SOLR-14347
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 8.5
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Attachments: SOLR-14347.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Steps to reproduce:
>  * create a cluster of a few nodes (tested with 7 nodes)
>  * define per-collection policies that distribute replicas exclusively on 
> different nodes per policy
>  * concurrently create a few collections, each using a different policy
>  * resulting replica placement will be seriously wrong, causing many policy 
> violations
> Running the same scenario but instead creating collections sequentially 
> results in no violations.
> I suspect this is caused by incorrect locking level for all collection 
> operations (as defined in {{CollectionParams.CollectionAction}}) that create 
> new replica placements - i.e. CREATE, ADDREPLICA, MOVEREPLICA, DELETENODE, 
> REPLACENODE, SPLITSHARD, RESTORE, REINDEXCOLLECTION. All of these operations 
> use the policy engine to create new replica placements, and as a result they 
> change the cluster state. However, currently these operations are locked (in 
> {{OverseerCollectionMessageHandler.lockTask}} ) using 
> {{LockLevel.COLLECTION}}. In practice this means that the lock is held only 
> for the particular collection that is being modified.
> A straightforward fix for this issue is to change the locking level to 
> CLUSTER (and I confirm this fixes the scenario described above). However, 
> this effectively serializes all collection operations listed above, which 
> will result in general slow-down of all collection operations.






[GitHub] [lucene-solr] mariemat opened a new pull request #1373: SOLR-14340 - ZkStateReader.readConfigName is doing too much work

2020-03-23 Thread GitBox
mariemat opened a new pull request #1373: SOLR-14340 - 
ZkStateReader.readConfigName is doing too much work
URL: https://github.com/apache/lucene-solr/pull/1373
 
 
   # Description
   
   Reduced the scope of ZkStateReader.readConfigName() as described in the JIRA 
by David.
   
   
   # Solution
   
   * Removed the check to ensure that the Zk Node exists.
   * Removed the check for the existence of the Configset entry.
   In a local test with 600 collections, it saved 2/5 of the GETCLUSTERSTATUS  
execution time.
   
   # Tests
   
   Removed an assert in TestCollectionAPI that checks that collections with a 
missing configset are not returned. That behavior will no longer happen.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [ X] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [  ] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [  ] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [ X] I have developed this patch against the `master` branch.
   - [ X] I have run `ant precommit` and the appropriate test suite.
   - [  ] I have added tests for my changes.
   - [  ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   





[GitHub] [lucene-solr] epugh commented on issue #504: SOLR-13008 Indent JSON/XML formatted field value if indent=true and using DocumentTransformer

2020-03-23 Thread GitBox
epugh commented on issue #504: SOLR-13008 Indent JSON/XML formatted field value 
if indent=true and using DocumentTransformer
URL: https://github.com/apache/lucene-solr/pull/504#issuecomment-602692672
 
 
   This whole PR may be too icky to really press forward with.  Will think 
about it some more to see if there is a better route.





[GitHub] [lucene-solr] epugh commented on issue #1302: SOLR-14294 fix typo in message

2020-03-23 Thread GitBox
epugh commented on issue #1302: SOLR-14294 fix typo in message
URL: https://github.com/apache/lucene-solr/pull/1302#issuecomment-602688633
 
 
   Hey @madrob any chance of getting this committed?  I've got a stack of open 
PR's I'd like to get dealt with, one way or the other ;-)





[GitHub] [lucene-solr] epugh commented on issue #404: Comment to explain how to use URLClassifyProcessorFactory

2020-03-23 Thread GitBox
epugh commented on issue #404: Comment to explain how to use 
URLClassifyProcessorFactory
URL: https://github.com/apache/lucene-solr/pull/404#issuecomment-602687499
 
 
   @janhoy I went ahead and made the changes to @ohtwadi's original PR, and it's 
available at https://github.com/apache/lucene-solr/pull/1372. I think you can 
close this PR in favor of #1372!





[GitHub] [lucene-solr] epugh edited a comment on issue #404: Comment to explain how to use URLClassifyProcessorFactory

2020-03-23 Thread GitBox
epugh edited a comment on issue #404: Comment to explain how to use 
URLClassifyProcessorFactory
URL: https://github.com/apache/lucene-solr/pull/404#issuecomment-602687499
 
 
   @janhoy I went ahead and made the changes to @ohtwadi's original PR, created 
SOLR-14358, and it's available at 
https://github.com/apache/lucene-solr/pull/1372. I think you can close this 
PR in favor of #1372!





[GitHub] [lucene-solr] epugh opened a new pull request #1372: SOLR-14358 Document how to use Processor in JavaDocs

2020-03-23 Thread GitBox
epugh opened a new pull request #1372: SOLR-14358 Document how to use Processor 
in JavaDocs
URL: https://github.com/apache/lucene-solr/pull/1372
 
 
   
   # Description
   
   There is an existing PR (https://github.com/apache/lucene-solr/pull/404) 
from 2018 that needed some updates to the JavaDoc formatting.
   
   # Solution
   
   I fixed the formatting.
   
   # Tests
   
   ran `ant documentation`
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [ X] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [ x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [ X] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [ X] I have developed this patch against the `master` branch.
   - [ X] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   





[jira] [Assigned] (SOLR-11775) json.facet can use inconsistent Long/Integer for "count" depending on shard count

2020-03-23 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N reassigned SOLR-11775:
---

Assignee: Munendra S N

> json.facet can use inconsistent Long/Integer for "count" depending on shard 
> count
> -
>
> Key: SOLR-11775
> URL: https://issues.apache.org/jira/browse/SOLR-11775
> Project: Solr
>  Issue Type: Bug
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Assignee: Munendra S N
>Priority: Major
>
> (NOTE: I noticed this while working on a test for {{type: range}}, but it's 
> possible other facet types may be affected as well.)
> When dealing with a single-core request -- either standalone or a collection 
> with only one shard -- json.facet seems to use "Integer" objects to return 
> the "count" of facet buckets; however, if the shard count is increased then 
> the end client gets a "Long" object for the "count".
> (This isn't noticeable when using {{wt=json}} but can be very problematic when 
> trying to write client code using {{wt=xml}} or SolrJ.)
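A small defensive sketch for the client side (an illustration of a workaround, not a fix for the issue, and the class name is made up): reading the bucket "count" as a Number works whether a single-shard response returns an Integer or a distributed one returns a Long.

{code:java}
import org.apache.solr.common.util.NamedList;

public class FacetCountSketch {
  /** Returns the bucket "count" as a long regardless of the concrete Number type. */
  public static long bucketCount(NamedList<Object> bucket) {
    Object count = bucket.get("count");
    return (count instanceof Number) ? ((Number) count).longValue() : 0L;
  }
}
{code}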






[jira] [Commented] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-23 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064861#comment-17064861
 ] 

Munendra S N commented on SOLR-14345:
-

 [^SOLR-14345.patch] 
A slightly improved patch, but it doesn't yet support NoOpResponseParser.

> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch, SOLR-14345.patch
>
>
> Default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding some tests which use 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.






[jira] [Updated] (SOLR-14345) Error messages are not properly propagated with non-default response parsers

2020-03-23 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14345:

Attachment: SOLR-14345.patch

> Error messages are not properly propagated with non-default response parsers
> 
>
> Key: SOLR-14345
> URL: https://issues.apache.org/jira/browse/SOLR-14345
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14345.patch, SOLR-14345.patch, SOLR-14345.patch
>
>
> Default {{ResponseParser}} is {{BinaryResponseParser}}. When a non-default 
> response parser is specified in the request, the error message is not 
> propagated to the user. This happens in SolrCloud mode.
> I came across this problem when working on adding some tests which use 
> {{SolrTestCaseHS}}, but a similar problem exists with the SolrJ client.
> Also, the same problem exists in both HttpSolrClient and Http2SolrClient.






[jira] [Comment Edited] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Christian Hafner-Sprengholz (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064838#comment-17064838
 ] 

Christian Hafner-Sprengholz edited comment on SOLR-13183 at 3/23/20, 3:02 PM:
--

Thanks very much [~janhoy] for your answers and the analysis.
The Jetty interception is something I can't change on my own...

So what can be done?


was (Author: haspre):
Thanks very much [~janhoy] for your answers and the analysis.
The Jetty interception is something I can't change on my own...

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDispatchFilter.doFilter(), line 403 calls method forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].
> h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '<solr></solr>' > /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
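To illustrate the double-decoding suspicion above with plain JDK classes (only an analogy; Jetty's HttpURI.getDecodedPath() returns null rather than throwing): decoding %25 once yields %, and decoding that result again is rejected as an incomplete escape.

{code:java}
import java.net.URLDecoder;

public class DoubleDecodeSketch {
  public static void main(String[] args) throws Exception {
    String raw = "/solr/films/schema/%25";                 // what the client actually sent
    String once = URLDecoder.decode(raw, "UTF-8");         // "/solr/films/schema/%"
    System.out.println("decoded once: " + once);
    try {
      URLDecoder.decode(once, "UTF-8");                    // decoding again: "%" is an incomplete escape
    } catch (IllegalArgumentException e) {
      System.out.println("second decode rejected: " + e.getMessage());
    }
  }
}
{code}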






[jira] [Updated] (SOLR-14343) Properly set initCapacity in NamedList

2020-03-23 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14343:

Fix Version/s: 8.6
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Properly set initCapacity in NamedList
> --
>
> Key: SOLR-14343
> URL: https://issues.apache.org/jira/browse/SOLR-14343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Minor
> Fix For: 8.6
>
> Attachments: SOLR-14343.patch
>
>
> In {{NamedList(Map)}}, the backing list is initialised to map.size() instead of 
> 2 times the size. There are a few other instances where the initial capacity 
> can be set correctly.
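An illustrative sketch (not NamedList's actual code, and the class name is made up) of why a flat name/value list backing a map of N entries needs an initial capacity of 2*N: names and values are interleaved in one list.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class FlatPairListSketch {
  private final List<Object> nvPairs;

  public FlatPairListSketch(Map<String, ?> map) {
    // One slot for each name plus one for each value, so 2 * map.size().
    this.nvPairs = new ArrayList<>(map.size() << 1);
    for (Map.Entry<String, ?> e : map.entrySet()) {
      nvPairs.add(e.getKey());
      nvPairs.add(e.getValue());
    }
  }

  public int size() {
    return nvPairs.size() >> 1;
  }
}
{code}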






[jira] [Updated] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-23 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-14348:

Fix Version/s: 8.6
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

> Split TestJsonFacets to multiple Test Classes
> -
>
> Key: SOLR-14348
> URL: https://issues.apache.org/jira/browse/SOLR-14348
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14348.patch
>
>
> {{TestJsonFacets}} has parameterized testing. It runs each test for each 
> facet.method. There are error cases which don't actually need it. Also, 
> facet.method is applicable only to term facets.
> There are a few range facet tests which run repeatedly without any change 
> (facet.method has no effect). Also, splitting would help when we introduce 
> facet.method for range facets, which would be different from term facets.






[jira] [Commented] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064857#comment-17064857
 ] 

ASF subversion and git services commented on SOLR-14348:


Commit fceecde7e67d4af59c44361e5669e7077e510733 in lucene-solr's branch 
refs/heads/branch_8x from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fceecde ]

SOLR-14348: split TestJsonFacets to multiple test classes

* TestJsonFacet split into 3 classes, TestsJsonFacets, TestJsonFacetErrors
  and TestJsonRangeFacet
* TestJsonFacets contains mainly terms faceting and stats
* range facet covers distributed cases too


> Split TestJsonFacets to multiple Test Classes
> -
>
> Key: SOLR-14348
> URL: https://issues.apache.org/jira/browse/SOLR-14348
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14348.patch
>
>
> {{TestJsonFacets}} has parameterized testing. It runs each test for each 
> facet.method. There are error cases which don't actually need it. Also, 
> facet.method is applicable only to term facets.
> There are a few range facet tests which run repeatedly without any change 
> (facet.method has no effect). Also, splitting would help when we introduce 
> facet.method for range facets, which would be different from term facets.






[jira] [Commented] (SOLR-14343) Properly set initCapacity in NamedList

2020-03-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064858#comment-17064858
 ] 

ASF subversion and git services commented on SOLR-14343:


Commit bc45ce20eac495e333adb890a80b2a9b1445379c in lucene-solr's branch 
refs/heads/branch_8x from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bc45ce2 ]

SOLR-14343: set initcapacity properly in NamedList

* This is when map or map.entry array is passed in constructor


> Properly set initCapacity in NamedList
> --
>
> Key: SOLR-14343
> URL: https://issues.apache.org/jira/browse/SOLR-14343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Minor
> Attachments: SOLR-14343.patch
>
>
> In {{NamedList(Map)}}, the backing list is initialised to map.size() instead of 
> 2 times the size. There are a few other instances where the initial capacity 
> can be set correctly.






[jira] [Comment Edited] (SOLR-14325) Core status could be improved to not require an IndexSearcher

2020-03-23 Thread Richard Goodman (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064852#comment-17064852
 ] 

Richard Goodman edited comment on SOLR-14325 at 3/23/20, 2:47 PM:
--

Hey, 

I did not try the {{getNewestSearcher}} before I left on Thursday, but spent 
today implementing this and trying it, and again had a significantly positive 
impact, _(I won't upload the json dumps this time around)_.

Oddly, I found that there were only delays when it came to adding a replica 
elsewhere and waiting for that to recover, but not when an instance was 
previously down and replicas were recovering, similarly when doing a restore 
and increasing the replication that way.

Anyway, regarding the timeout, the biggest I saw was only 28 seconds _(for a core 
that was around 13GB)_, which, compared to the numbers before this change, is a 
big enough improvement for us. 

I've updated the patch; this time around it's just a simple one-liner.


was (Author: goodman):
Hey, 

I did not try the {{getNewestSearcher}} before I left on Thursday, but spent 
today implementing this and trying it, and again had a significantly positive 
impact, _(I won't upload the json dumps this time around)_.

Oddly, I found that there were only delays when it came to adding a replica 
elsewhere and waiting for that to recover, but not when an instance was 
previously down and replicas were recovering, similarly when doing a restore 
and increasing the replication that way.

Anyway, regarding the timeout, the biggest I saw was only 28 seconds, which, 
compared to the numbers before this change, is a big enough improvement for us. 

I've updated the patch; this time around it's just a simple one-liner.

> Core status could be improved to not require an IndexSearcher
> -
>
> Key: SOLR-14325
> URL: https://issues.apache.org/jira/browse/SOLR-14325
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-14325.patch, SOLR-14325.patch, SOLR-14325.patch
>
>
> When the core status is told to request "indexInfo", it currently grabs the 
> SolrIndexSearcher but only to grab the Directory.  SolrCore.getIndexSize also 
> only requires the Directory.  By insisting on a SolrIndexSearcher, we 
> potentially block for awhile if the core is in recovery since there is no 
> SolrIndexSearcher.
> [https://lists.apache.org/thread.html/r076218c964e9bd6ed0a53133be9170c3cf36cc874c1b4652120db417%40%3Cdev.lucene.apache.org%3E]
> It'd be nice to have a solution that conditionally used the Directory of the 
> SolrIndexSearcher only if it's present so that we don't waste time creating 
> one either.






[jira] [Commented] (SOLR-14325) Core status could be improved to not require an IndexSearcher

2020-03-23 Thread Richard Goodman (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064852#comment-17064852
 ] 

Richard Goodman commented on SOLR-14325:


Hey, 

I did not try the {{getNewestSearcher}} before I left on Thursday, but spent 
today implementing this and trying it, and again had a significantly positive 
impact, _(I won't upload the json dumps this time around)_.

Oddly, I found that there were only delays when it came to adding a replica 
elsewhere and waiting for that to recover, but not when an instance was 
previously down and replicas were recovering, similarly when doing a restore 
and increasing the replication that way.

Anyway, regarding the timeout, the biggest I saw was only 28 seconds, which, 
compared to the numbers before this change, is a big enough improvement for us. 

I've updated the patch; this time around it's just a simple one-liner.

> Core status could be improved to not require an IndexSearcher
> -
>
> Key: SOLR-14325
> URL: https://issues.apache.org/jira/browse/SOLR-14325
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-14325.patch, SOLR-14325.patch, SOLR-14325.patch
>
>
> When the core status is told to request "indexInfo", it currently grabs the 
> SolrIndexSearcher but only to grab the Directory.  SolrCore.getIndexSize also 
> only requires the Directory.  By insisting on a SolrIndexSearcher, we 
> potentially block for awhile if the core is in recovery since there is no 
> SolrIndexSearcher.
> [https://lists.apache.org/thread.html/r076218c964e9bd6ed0a53133be9170c3cf36cc874c1b4652120db417%40%3Cdev.lucene.apache.org%3E]
> It'd be nice to have a solution that conditionally used the Directory of the 
> SolrIndexSearcher only if it's present so that we don't waste time creating 
> one either.






[jira] [Updated] (SOLR-14325) Core status could be improved to not require an IndexSearcher

2020-03-23 Thread Richard Goodman (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Goodman updated SOLR-14325:
---
Attachment: SOLR-14325.patch

> Core status could be improved to not require an IndexSearcher
> -
>
> Key: SOLR-14325
> URL: https://issues.apache.org/jira/browse/SOLR-14325
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Major
> Attachments: SOLR-14325.patch, SOLR-14325.patch, SOLR-14325.patch
>
>
> When the core status is told to request "indexInfo", it currently grabs the 
> SolrIndexSearcher but only to grab the Directory.  SolrCore.getIndexSize also 
> only requires the Directory.  By insisting on a SolrIndexSearcher, we 
> potentially block for awhile if the core is in recovery since there is no 
> SolrIndexSearcher.
> [https://lists.apache.org/thread.html/r076218c964e9bd6ed0a53133be9170c3cf36cc874c1b4652120db417%40%3Cdev.lucene.apache.org%3E]
> It'd be nice to have a solution that conditionally used the Directory of the 
> SolrIndexSearcher only if it's present so that we don't waste time creating 
> one either.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14348) Split TestJsonFacets to multiple Test Classes

2020-03-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064849#comment-17064849
 ] 

ASF subversion and git services commented on SOLR-14348:


Commit 06fd70fc0f3d399c4593cdaf9d5d06cd44cd920d in lucene-solr's branch 
refs/heads/master from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=06fd70f ]

SOLR-14348: split TestJsonFacets to multiple test classes

* TestJsonFacets split into 3 classes: TestJsonFacets, TestJsonFacetErrors,
  and TestJsonRangeFacet
* TestJsonFacets contains mainly terms faceting and stats
* range facet covers distributed cases too


> Split TestJsonFacets to multiple Test Classes
> -
>
> Key: SOLR-14348
> URL: https://issues.apache.org/jira/browse/SOLR-14348
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Major
> Attachments: SOLR-14348.patch
>
>
> {{TestJsonFacets}} uses parameterized testing: it runs each test for each 
> facet.method. There are error cases which don't actually need that. Also, 
> facet.method is applicable only to term facets.
> There are a few Range facet tests which run repeatedly without any change 
> (facet.method has no effect on them). Also, splitting would help when we 
> introduce facet.method for range facets, which would be different from term facets.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14343) Properly set initCapacity in NamedList

2020-03-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064850#comment-17064850
 ] 

ASF subversion and git services commented on SOLR-14343:


Commit 5630619dfd18f43126028f0e12eba06c67970a91 in lucene-solr's branch 
refs/heads/master from Munendra S N
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5630619 ]

SOLR-14343: set initcapacity properly in NamedList

* This is when map or map.entry array is passed in constructor


> Properly set initCapacity in NamedList
> --
>
> Key: SOLR-14343
> URL: https://issues.apache.org/jira/browse/SOLR-14343
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Munendra S N
>Priority: Minor
> Attachments: SOLR-14343.patch
>
>
> In {{NamedList(Map)}}, the backing list is initialised to map.size() instead of 
> 2 times the size. There are a few other instances where the initial capacity 
> can be set correctly.
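
To illustrate the point (a hedged sketch, not the actual Solr code; the 
{{nvPairs}} field name and generics here are assumptions): NamedList keeps 
names and values interleaved in one flat list, so a map with n entries needs a 
capacity of 2*n.

{code:java}
// Sketch of a NamedList(Map)-style constructor with the capacity sized correctly.
public NamedList(Map<String, ? extends T> map) {
  nvPairs = new ArrayList<>(map.size() << 1); // 2 * map.size(): one slot for the name, one for the value
  for (Map.Entry<String, ? extends T> e : map.entrySet()) {
    nvPairs.add(e.getKey());
    nvPairs.add(e.getValue());
  }
}
{code}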



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Christian Hafner-Sprengholz (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064838#comment-17064838
 ] 

Christian Hafner-Sprengholz commented on SOLR-13183:


Thanks very much [~janhoy] for your answers and the analysis.
The Jetty interception is something I can't change on my own...

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDispatchFilter.doFilter(), line 403 calls method forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].
> h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14359) Admin UI has "Select an option" for collections and cores drop-downs.

2020-03-23 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14359:
-

 Summary: Admin UI has "Select an option" for collections and 
cores drop-downs.
 Key: SOLR-14359
 URL: https://issues.apache.org/jira/browse/SOLR-14359
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Affects Versions: master (9.0)
Reporter: Erick Erickson
 Attachments: Screen Shot 2020-03-23 at 10.23.43 AM.png





--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9286) FST construction explodes memory in BitTable

2020-03-23 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064832#comment-17064832
 ] 

Dawid Weiss commented on LUCENE-9286:
-

I placed a repro snippet here, including the automaton:
https://github.com/dweiss/lucene9286

Here is an overview of construction/scan times for varying oversizing factors 
(0-1 in 0.2 steps).
{code}
[Task]                      [Time]    [%]     [+T₀]
Reading FST                 333ms     1.5%    0ms
FST construction (of=0.0)   3s 127ms  14.4%   353ms
 @ FST RAM: 56,055,936 bytes
 @ Oversizing factor: 0.00
TermEnum scan (of=0.0)      345ms     1.6%    3s
FST construction (of=0.2)   1s 997ms  9.2%    3s
 @ FST RAM: 56,055,936 bytes
 @ Oversizing factor: 0.20
TermEnum scan (of=0.2)      296ms     1.4%    5s
FST construction (of=0.4)   1s 914ms  8.8%    6s
 @ FST RAM: 56,055,936 bytes
 @ Oversizing factor: 0.40
TermEnum scan (of=0.4)      284ms     1.3%    8s
FST construction (of=0.6)   1s 908ms  8.8%    8s
 @ FST RAM: 56,055,936 bytes
 @ Oversizing factor: 0.60
TermEnum scan (of=0.6)      269ms     1.2%    10s
FST construction (of=0.8)   2s 52ms   9.4%    10s
 @ FST RAM: 56,055,056 bytes
 @ Oversizing factor: 0.80
TermEnum scan (of=0.8)      273ms     1.3%    12s
FST construction (of=1.0)   5s        24.4%   12s
 @ FST RAM: 54,945,816 bytes
 @ Oversizing factor: 1.00
TermEnum scan (of=1.0)      3s 670ms  16.8%   18s
{code}

> FST construction explodes memory in BitTable
> 
>
> Key: LUCENE-9286
> URL: https://issues.apache.org/jira/browse/LUCENE-9286
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Attachments: screen-[1].png
>
>
> I see a dramatic increase in the amount of memory required for construction 
> of (arguably large) automata. It currently OOMs with 8GB of memory consumed 
> for bit tables. I am pretty sure this didn't require so much memory before 
> (the automaton is ~50MB after construction).
> Something bad happened in between. Thoughts, [~broustant], [~sokolov]?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9280) Add ability to skip non-competitive documents on field sort

2020-03-23 Thread Mayya Sharipova (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9280?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064825#comment-17064825
 ] 

Mayya Sharipova commented on LUCENE-9280:
-

> Since DocValues are stored in docid order, how will skipping work? 

We have decided to do this similar to how `LongDistanceFeatureQuery` works. The 
initial iterator for the collector is indeed doc values, but as we set the 
bottom value in the comparator, the iterator is updated from PointValues to 
include only docs whose values are lower than the bottom value (in the case of 
an ascending sort).

This is most beneficial when docs were added sequentially with ever-increasing 
values, e.g. a logging use case with an increasing date field. For example, if 
doc1 has a field1 value of 1, doc2 – 2, doc3 – 3, ..., doc100 – 100, and we need 
to retrieve the top 3 docs with the smallest field1 values, currently we would 
iterate through all docs. With the proposed change, as soon as we collect the 
first 3 docs and set the bottom value, the collector's iterator can be updated 
to skip all other docs as they are not competitive.
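
To make that concrete, here is a rough sketch of the ascending-sort case using 
the PointValues intersect API on a hypothetical 1-D long field ({{leafReader}}, 
{{collect()}} and {{queueBottomValue()}} are placeholders, not real Lucene 
methods; this is an illustration of the idea, not the actual patch):

{code:java}
// Once the priority queue is full, visit only values that are still
// competitive for an ascending sort (value < current bottom of the queue).
PointValues points = leafReader.getPointValues("field1");
long bottom = queueBottomValue(); // placeholder: current worst value in the queue
points.intersect(new PointValues.IntersectVisitor() {
  @Override
  public void visit(int docID) throws IOException {
    collect(docID); // the whole cell is competitive
  }

  @Override
  public void visit(int docID, byte[] packedValue) throws IOException {
    if (LongPoint.decodeDimension(packedValue, 0) < bottom) {
      collect(docID);
    }
  }

  @Override
  public PointValues.Relation compare(byte[] minPackedValue, byte[] maxPackedValue) {
    long min = LongPoint.decodeDimension(minPackedValue, 0);
    long max = LongPoint.decodeDimension(maxPackedValue, 0);
    if (min >= bottom) {
      return PointValues.Relation.CELL_OUTSIDE_QUERY; // nothing competitive here
    }
    return max < bottom ? PointValues.Relation.CELL_INSIDE_QUERY
                        : PointValues.Relation.CELL_CROSSES_QUERY;
  }
});
{code}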

 

> Add ability to skip non-competitive documents on field sort 
> 
>
> Key: LUCENE-9280
> URL: https://issues.apache.org/jira/browse/LUCENE-9280
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mayya Sharipova
>Priority: Minor
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Today collectors, once they collect enough docs, can instruct scorers to 
> update their iterators to skip non-competitive documents. This is applicable 
> only for a case when we need top docs by _score.
> It would be nice to also have an ability to skip non-competitive docs when we 
> need top docs sorted by other fields different from _score. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (LUCENE-9221) Lucene Logo Contest

2020-03-23 Thread Ignacio Vera (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9221?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-9221:
-
Attachment: image.png

> Lucene Logo Contest
> ---
>
> Key: LUCENE-9221
> URL: https://issues.apache.org/jira/browse/LUCENE-9221
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Ryan Ernst
>Priority: Trivial
> Attachments: LuceneLogo.png, image.png
>
>
> The Lucene logo has served the project well for almost 20 years. However, it 
> does sometimes show its age and lacks modern nice-to-haves like invertible 
> or grayscale variants.
>   
>  The PMC would like to have a contest to replace the current logo. This issue 
> will serve as the submission mechanism for that contest. When the submission 
> deadline closes, a community poll will be used to guide the PMC in the 
> decision of which logo to choose. Keeping the current logo will be a possible 
> outcome of this decision, if a majority likes the current logo more than any 
> other proposal.
>   
>  The logo should adhere to the guidelines set forth by Apache for project 
> logos ([https://www.apache.org/foundation/marks/pmcs#graphics]), specifically 
> that the full project name, "Apache Lucene", must appear in the logo 
> (although the word "Apache" may be in a smaller font than "Lucene").
>   
>  The contest will last approximately one month. The submission deadline is 
> -*Monday, March 16, 2020*- *Monday, April 6, 2020*. Submissions should be 
> attached in a single zip or tar archive, with the filename of the form 
> {{[user]-[proposal number].[extension]}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14300) Some conditional clauses on unindexed field will be ignored by query parser in some specific cases

2020-03-23 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064817#comment-17064817
 ] 

Lucene/Solr QA commented on SOLR-14300:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m 25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m 26s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 75m  
7s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14300 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12996504/SOLR-14300.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-170-generic #199-Ubuntu SMP 
Thu Nov 14 01:45:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / aaf08c9 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/723/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/723/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Some conditional clauses on unindexed field will be ignored by query parser 
> in some specific cases
> --
>
> Key: SOLR-14300
> URL: https://issues.apache.org/jira/browse/SOLR-14300
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 8.3, 8.4
> Environment: Solr 7.3.1 
> centos7.5
>Reporter: Hongtai Xue
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 8.3, 8.4
>
> Attachments: SOLR-14300.patch
>
>
> In some specific cases, conditional clauses on an unindexed field will be 
> ignored:
> * for a query like q=A:1 OR B:1 OR A:2 OR B:2,
>  if field B is not indexed (but docValues="true"), "B:1" will be lost.
>   
>  * but if you write the query as q=A:1 OR A:2 OR B:1 OR B:2,
>  it will work perfectly.
> The only difference between the two queries is that they are written in 
> different orders: one is *ABAB*, the other is *AABB*.
>  
> *steps to reproduce*
>  you can easily reproduce this problem on a solr collection with _default 
> configset and exampledocs/books.csv data.
>  # create a _default collection
> {code:java}
> bin/solr create -c books -s 2 -rf 2{code}
>  # post books.csv.
> {code:java}
> bin/post -c books example/exampledocs/books.csv{code}
>  # run followed query.
>  ** query1: 
> [http://localhost:8983/solr/books/select?q=+(name_str:Foundation+OR+cat:book+OR+name_str:Jhereg+OR+cat:cd)=query]
>  ** query2: 
> [http://localhost:8983/solr/books/select?q=+(name_str:Foundation+OR+name_str:Jhereg+OR+cat:book+OR+cat:cd)=query]
>  ** then you can see that the parsed queries are different.
>  *** query1.  ("name_str:Foundation" is lost.)
> {code:json}
>  "debug":{
>      "rawquerystring":"+(name_str:Foundation OR cat:book OR 

[jira] [Commented] (LUCENE-9286) FST construction explodes memory in BitTable

2020-03-23 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064815#comment-17064815
 ] 

Dawid Weiss commented on LUCENE-9286:
-

I confirm my original problem (memory blowup) is related to stored copies of 
arcs. What was previously fairly cheap (copyOf) has become fairly heavy and 
blows up memory when you have data structures that require storing intermediate 
Arcs during processing. 

I also noticed something else that worries me. We have very specific FSTs 
that are shallow (4-8 levels) but have a very high fan-out on arc labels 
(labels are ints). I don't know if this is related, but when I timed 
automaton construction and traversals I saw a significant slowdown.

I created a snippet of code that rebuilds the automaton and does a TermEnum 
enumeration scan with IntsRefFSTEnum; the "Arc transition" entry below is a bit 
more complex code walking the FST.

With the default oversizing factor (1) the results are:

{code}
[Task]              [Time]    [%]     [+T₀]
FST construction    7s        42.3%   0ms
 @ FST RAM: [52.40MB allocated, 52.40MB utilized (100.0 %)]
 @ Oversizing factor: 1.00
TermEnum scan       4s 260ms  25.1%   7s
Arc transition      5s        32.6%   11s
{code}

Recompiled with the oversizing factor of 0 the results are:

{code}
[Task]              [Time]    [%]     [+T₀]
FST construction    2s 957ms  60.1%   0ms
 @ FST RAM: [53.46MB allocated, 53.46MB utilized (100.0 %)]
 @ Oversizing factor: 0.00
TermEnum scan       298ms     6.1%    2s
Arc transition      1s 663ms  33.8%   3s
{code}

This is fairly consistent across runs. The automaton is consistently faster to 
create and walk if setDirectAddressingMaxOversizingFactor is set to 0. The 
automaton is also not much larger (53.46MB compared to 52.4MB).

I don't know how specific this is to the kind of automata we're building and I 
can't offer much in terms of improving this situation. I can share the 
automaton if you guys would like to take a closer look.

One other lesson from dealing with the FST code is that mutable Arc classes make 
everything much more complex and error-prone... I don't know what the 
performance penalty would be for giving up mutability here, but it'd 
definitely help in tracking down odd cases like this one.
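
For reference, this is roughly how the factor gets applied when building an FST 
(a hedged sketch assuming the 8.x {{org.apache.lucene.util.fst.Builder}} API 
and the {{setDirectAddressingMaxOversizingFactor}} setter mentioned above; the 
input data is made up):

{code:java}
// Build a small FST with direct-addressing oversizing disabled (factor = 0),
// i.e. the setting that was faster to create and walk in the timings above.
PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
Builder<Long> builder = new Builder<>(FST.INPUT_TYPE.BYTE4, outputs);
builder.setDirectAddressingMaxOversizingFactor(0f); // default is 1.0

IntsRefBuilder scratch = new IntsRefBuilder();
// inputs must be added in sorted order
builder.add(Util.toUTF32("apple", scratch), 1L);
builder.add(Util.toUTF32("banana", scratch), 2L);
builder.add(Util.toUTF32("cherry", scratch), 3L);
FST<Long> fst = builder.finish();
{code}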


> FST construction explodes memory in BitTable
> 
>
> Key: LUCENE-9286
> URL: https://issues.apache.org/jira/browse/LUCENE-9286
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Attachments: screen-[1].png
>
>
> I see a dramatic increase in the amount of memory required for construction 
> of (arguably large) automata. It currently OOMs with 8GB of memory consumed 
> for bit tables. I am pretty sure this didn't require so much memory before 
> (the automaton is ~50MB after construction).
> Something bad happened in between. Thoughts, [~broustant], [~sokolov]?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn commented on a change in pull request #1371: SOLR-14333: print readable version of CollapsedPostFilter query

2020-03-23 Thread GitBox
munendrasn commented on a change in pull request #1371: SOLR-14333: print 
readable version of CollapsedPostFilter query
URL: https://github.com/apache/lucene-solr/pull/1371#discussion_r396465587
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/CollapsingQParserPlugin.java
 ##
 @@ -276,7 +281,19 @@ public int getCost() {
 }
 
 public String toString(String s) {
-  return s;
+  return "!collapse field=" + this.collapseField +
+  ", nullPolicy=" + getNullPolicyString(this.nullPolicy) +
+  ", groupHeadSelector=" + this.groupHeadSelector.toString() +
+  (hint == null ? "": ", hint=" + this.hint) +
+  ", size=" + this.size;
+}
+
+private String getNullPolicyString(int nullPolicy) {
 
 Review comment:
   Check if converting them to an enum makes it cleaner.
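
   A rough sketch of what that could look like (the enum and its constant names here are hypothetical, just to illustrate the suggestion):

```java
// Null policy as an enum instead of int constants; toString()/name() then
// falls out of the enum itself.
public enum NullPolicy {
  IGNORE, COLLAPSE, EXPAND;

  public String getName() {
    return name().toLowerCase(Locale.ROOT); // assumes java.util.Locale is imported
  }
}
```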


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn commented on a change in pull request #1371: SOLR-14333: print readable version of CollapsedPostFilter query

2020-03-23 Thread GitBox
munendrasn commented on a change in pull request #1371: SOLR-14333: print 
readable version of CollapsedPostFilter query
URL: https://github.com/apache/lucene-solr/pull/1371#discussion_r396465065
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/CollapsingQParserPlugin.java
 ##
 @@ -276,7 +281,19 @@ public int getCost() {
 }
 
 public String toString(String s) {
-  return s;
+  return "!collapse field=" + this.collapseField +
 
 Review comment:
   Maybe add curly braces at the beginning and end.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] munendrasn commented on a change in pull request #1371: SOLR-14333: print readable version of CollapsedPostFilter query

2020-03-23 Thread GitBox
munendrasn commented on a change in pull request #1371: SOLR-14333: print 
readable version of CollapsedPostFilter query
URL: https://github.com/apache/lucene-solr/pull/1371#discussion_r396464755
 
 

 ##
 File path: 
solr/core/src/java/org/apache/solr/search/CollapsingQParserPlugin.java
 ##
 @@ -184,6 +184,11 @@ public boolean equals(final Object other) {
 public int hashCode() {
   return 17 * (31 + selectorText.hashCode()) * (31 + type.hashCode());
 }
+
+@Override
+public String toString(){
+  return selectorText;
 
 Review comment:
   Shouldn't we include the type?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14352) External links from Solr Javadocs to Lucene docs are broken on the master (9.0.0)

2020-03-23 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-14352:
-
Status: Open  (was: Patch Available)

> External links from Solr Javadocs to Lucene docs are broken on the master 
> (9.0.0)
> -
>
> Key: SOLR-14352
> URL: https://issues.apache.org/jira/browse/SOLR-14352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
> Attachments: SOLR-14352.patch, SOLR-14352.patch, image.png, 
> javadoc_solr_branch8x.png, javadoc_solr_master.png
>
>
> On branch_8x (w/ java 8), "ant documentation" generates external links from 
> Solr docs to Lucene docs like the capture:
> !javadoc_solr_branch8x.png!
> On the master branch (w/ java11), the links are not created with the same 
> command:
> !javadoc_solr_master.png!
> It looks like the Ant javadoc task does not recognize the {{element-list}} 
> file, which was introduced in (maybe) Java 10 as the replacement for 
> {{package-list}}. (See also 
> https://docs.oracle.com/en/java/javase/11/tools/javadoc.html)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14352) External links from Solr Javadocs to Lucene docs are broken on the master (9.0.0)

2020-03-23 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-14352:
-
Status: Patch Available  (was: Open)

> External links from Solr Javadocs to Lucene docs are broken on the master 
> (9.0.0)
> -
>
> Key: SOLR-14352
> URL: https://issues.apache.org/jira/browse/SOLR-14352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
> Attachments: SOLR-14352.patch, SOLR-14352.patch, image.png, 
> javadoc_solr_branch8x.png, javadoc_solr_master.png
>
>
> On branch_8x (w/ java 8), "ant documentation" generates external links from 
> Solr docs to Lucene docs like the capture:
> !javadoc_solr_branch8x.png!
> On the master branch (w/ java11), the links are not created with the same 
> command:
> !javadoc_solr_master.png!
> It looks like the Ant javadoc task does not recognize the {{element-list}} 
> file, which was introduced in (maybe) Java 10 as the replacement for 
> {{package-list}}. (See also 
> https://docs.oracle.com/en/java/javase/11/tools/javadoc.html)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14334) Semi-reproducing seed for testSmileRequest

2020-03-23 Thread Munendra S N (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N resolved SOLR-14334.
-
Resolution: Fixed

> Semi-reproducing seed for testSmileRequest
> --
>
> Key: SOLR-14334
> URL: https://issues.apache.org/jira/browse/SOLR-14334
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> ant test -Dtestcase=TestSmileRequest -Dtests.method=testDistribJsonRequest 
> -Dtests.seed=D8AF63C0745DA4AF -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=en -Dtests.timezone=Australia/South -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>  
> It failed 3 of 4 runs. 2 of the failures were when I was testing the upgrade 
> for ZK and Netty, the other failure was on a fresh, unmodified pull of Solr.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14352) External links from Solr Javadocs to Lucene docs are broken on the master (9.0.0)

2020-03-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-14352:
---
Attachment: image.png

> External links from Solr Javadocs to Lucene docs are broken on the master 
> (9.0.0)
> -
>
> Key: SOLR-14352
> URL: https://issues.apache.org/jira/browse/SOLR-14352
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (9.0)
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
> Attachments: SOLR-14352.patch, SOLR-14352.patch, image.png, 
> javadoc_solr_branch8x.png, javadoc_solr_master.png
>
>
> On branch_8x (w/ java 8), "ant documentation" generates external links from 
> Solr docs to Lucene docs like the capture:
> !javadoc_solr_branch8x.png!
> On the master branch (w/ java11), the links are not created with the same 
> command:
> !javadoc_solr_master.png!
> It looks like the Ant javadoc task does not recognize the {{element-list}} 
> file, which was introduced in (maybe) Java 10 as the replacement for 
> {{package-list}}. (See also 
> https://docs.oracle.com/en/java/javase/11/tools/javadoc.html)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14334) Semi-reproducing seed for testSmileRequest

2020-03-23 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064803#comment-17064803
 ] 

Munendra S N commented on SOLR-14334:
-

Tried beasting to reproduce it
{code:java}
ant beast -Dtestcase=TestSmileRequest -Dtests.method=testDistribJsonRequest 
-Dbeast.iters=10 -Dtest.iters=10
{code}
This commit 
https://github.com/apache/lucene-solr/commit/4fd96bedc27adff61f3487539adbd67011181b90
(see also https://github.com/apache/lucene-solr/pull/1358#issuecomment-600294636) fixes it.

Since there are no test failures in the master build, closing this. We can reopen 
if we re-encounter it.

> Semi-reproducing seed for testSmileRequest
> --
>
> Key: SOLR-14334
> URL: https://issues.apache.org/jira/browse/SOLR-14334
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Priority: Major
>
> ant test -Dtestcase=TestSmileRequest -Dtests.method=testDistribJsonRequest 
> -Dtests.seed=D8AF63C0745DA4AF -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=en -Dtests.timezone=Australia/South -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>  
> It failed 3 of 4 runs. 2 of the failures were when I was testing the upgrade 
> for ZK and Netty, the other failure was on a fresh, unmodified pull of Solr.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14355) SolrCore Initialization Failures

2020-03-23 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14355?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-14355.
---
Resolution: Incomplete

First, please ask questions on the user's list first. Jira is for known 
code issues, not a support portal. See: 
http://lucene.apache.org/solr/community.html#mailing-lists-irc

Second, there is very little information here, not nearly enough to even begin 
to diagnose the issue. You might want to review: 
[https://wiki.apache.org/solr/UsingMailingLists]

Third, this is for Solr 5.5, which is no longer under any kind of development 
so it's unlikely to receive any attention.

 

 

> SolrCore Initialization Failures
> 
>
> Key: SOLR-14355
> URL: https://issues.apache.org/jira/browse/SOLR-14355
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 5.5.2
>Reporter: Thiyagarajan Ganesh Rajan
>Priority: Major
>  Labels: solrcoreInitializationFailures
>
> SolrCore Initialization Failures
> opsoffset_shard1_replica1: 
> org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Error opening new searcher



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064797#comment-17064797
 ] 

Jan Høydahl commented on SOLR-13183:


I managed to reproduce with docker

{code}
docker run --rm -ti solr:8.4.1 bash
solr start
solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
curl http://localhost:8983/solr/films/schema/%25
{code}

This seems to be an interaction between Jetty's servlet handling logic and the 
{{SolrSchemaRestApi}} Restlet API that should have been triggered. Looks like 
Jetty/Servlet intercepts the call before it is handed over to Restlet at all?

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDispatchFilter.doFilter(), line 403 calls method forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].
> h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (SOLR-14358) Comment to explain how to use URLClassifyProcessorFactory

2020-03-23 Thread David Eric Pugh (Jira)
David Eric Pugh created SOLR-14358:
--

 Summary: Comment to explain how to use URLClassifyProcessorFactory
 Key: SOLR-14358
 URL: https://issues.apache.org/jira/browse/SOLR-14358
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: UpdateRequestProcessors
Affects Versions: 8.4.1
Reporter: David Eric Pugh


There is an existing pull request out there that documents how to use this; 
however, it needs some formatting love before it can be committed.

https://github.com/apache/lucene-solr/pull/404



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064792#comment-17064792
 ] 

Jan Høydahl commented on SOLR-13183:


Ah, I saw it now in the "environment" field. We don't use that field for this 
purpose, so I moved the text into the issue description.

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDispatchFilter.doFilter(), line 403 calls method forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].
> h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13183:
---
Environment: (was: h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}
)

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDispatchFilter.doFilter(), line 403 calls method forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13183:
---
Description: 
Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/schema/%25
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
java.lang.NullPointerException
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
[...]
{noformat}

Function SolrDispatchFilter.doFilter(), line 403 calls method forward() on a 
null pointer. The problem happens because 
ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
happens because 
org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher() 
returns a null pointer. This happens because 
org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
{{/solr/films/schema/%}}, which is an invalid encoding.

I don’t fully follow the logic of the code but it seems that the 
percent-encoding of the URL has first been decoded and then it’s being decoded 
again?

We found this bug using [Diffblue Microservices 
Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
testing 
campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].

h1. Steps to reproduce

* Use a Linux machine.
*  Build commit {{ea2c8ba}} of Solr as described in the section below.
* Build the films collection as described below.
* Start the server using the command {{./bin/solr start -f -p 8983 -s 
/tmp/home}}
* Request the URL given in the bug description.

h1. Compiling the server

{noformat}
git clone https://github.com/apache/lucene-solr
cd lucene-solr
git checkout ea2c8ba
ant compile
cd solr
ant server
{noformat}

h1. Building the collection

We followed [Exercise 
2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
the [Solr Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. 
The attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} 
that you will obtain by following the steps below:

{noformat}
mkdir -p /tmp/home
echo '' > /tmp/home/solr.xml
{noformat}

In one terminal start a Solr instance in foreground:
{noformat}
./bin/solr start -f -p 8983 -s /tmp/home
{noformat}

In another terminal, create a collection of movies, with no shards and no 
replication, and initialize it:

{noformat}
bin/solr create -c films
curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
{"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
http://localhost:8983/solr/films/schema
curl -X POST -H 'Content-type:application/json' --data-binary 
'{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
http://localhost:8983/solr/films/schema
./bin/post -c films example/films/films.json
{noformat}


  was:
Requesting the following URL causes Solr to return an HTTP 500 error response:

{noformat}
http://localhost:8983/solr/films/schema/%25
{noformat}

The error response seems to be caused by the following uncaught exception:

{noformat}
java.lang.NullPointerException
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
[...]
{noformat}

Function SolrDispatchFilter.doFilter(), line 403 calls method forward() on a 
null pointer. The problem happens because 
ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
happens because 
org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher() 
returns a null pointer. This happens because 
org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
{{/solr/films/schema/%}}, which is an invalid encoding.

I don’t fully follow the logic of the code but it seems that the 
percent-encoding of the URL has first been decoded and then it’s being decoded 
again?

We found this bug using [Diffblue 

[jira] [Created] (SOLR-14357) solrj: using insecure namedCurves

2020-03-23 Thread Bernd Wahlen (Jira)
Bernd Wahlen created SOLR-14357:
---

 Summary: solrj: using insecure namedCurves
 Key: SOLR-14357
 URL: https://issues.apache.org/jira/browse/SOLR-14357
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Bernd Wahlen


I tried to run our backend with SolrJ 8.4.1 on JDK 14 and got the following error:
Caused by: java.lang.IllegalArgumentException: Error in security property. 
Constraint unknown: c2tnb191v1
After I removed all the X9.62 algorithms from the property 
jdk.disabled.namedCurves in
/usr/lib/jvm/java-14-openjdk-14.0.0.36-1.rolling.el7.x86_64/conf/security/java.security
everything is running.

This does not happen on staging (I think because there is only 1 Solr node, so the 
LB client is not used).
We do not set or change any SSL settings in solr.in.sh.
I don't know how to fix that (default config? Apache client settings?), but I 
think using insecure algorithms may be a security risk and not only a JDK 14 
issue.
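
Purely as an illustration (not a recommended fix), the property the JDK consults can 
be inspected, and in principle overridden, from application code. Whether a runtime 
override is honored depends on it running before the TLS/JCA code first reads the 
property, so treat the snippet below as a sketch only:

{code:java}
import java.security.Security;

public class NamedCurvesCheck {
  public static void main(String[] args) {
    // Print the curves the JDK currently disables; the X9.62 group includes c2tnb191v1.
    String disabled = Security.getProperty("jdk.disabled.namedCurves");
    System.out.println("jdk.disabled.namedCurves = " + disabled);

    // Illustrative only: drop the X9.62 entries, mirroring the manual java.security edit above.
    if (disabled != null) {
      String relaxed = disabled.replaceAll("X9\\.62 [^,]*(,\\s*)?", "");
      Security.setProperty("jdk.disabled.namedCurves", relaxed);
    }
  }
}
{code}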



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on a change in pull request #1364: SOLR-14335: Lock Solr's memory to prevent swapping

2020-03-23 Thread GitBox
janhoy commented on a change in pull request #1364: SOLR-14335: Lock Solr's 
memory to prevent swapping
URL: https://github.com/apache/lucene-solr/pull/1364#discussion_r396399944
 
 

 ##
 File path: solr/bootstrap/build.gradle
 ##
 @@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+apply plugin: 'java-library'
+
+jar {
+  manifest {
+attributes(
+'Main-Class': 'org.apache.solr.bootstrap.SolrBootstrap',
+'Class-Path': '. lib/start.jar jna.jar'
 
 Review comment:
   If you get it working, you may in fact push a change to the PR branch


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dweiss commented on a change in pull request #1364: SOLR-14335: Lock Solr's memory to prevent swapping

2020-03-23 Thread GitBox
dweiss commented on a change in pull request #1364: SOLR-14335: Lock Solr's 
memory to prevent swapping
URL: https://github.com/apache/lucene-solr/pull/1364#discussion_r396389388
 
 

 ##
 File path: solr/bootstrap/build.gradle
 ##
 @@ -0,0 +1,35 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+
+apply plugin: 'java-library'
+
+jar {
+  manifest {
+attributes(
+'Main-Class': 'org.apache.solr.bootstrap.SolrBootstrap',
+'Class-Path': '. lib/start.jar jna.jar'
 
 Review comment:
   This is a chance to clean up manual jar naming, actually. The classpath here 
should be something like this:
   "Class-Path": configurations.runtimeClasspath.collect { "lib/${it.getName()}" }.join(' ')
   then bootstrap would have accurate coordinates at the exact library versions 
it needs to launch. Plus -- the "rename" trickery wouldn't be needed anymore in 
server.gradle.
   
   The name of the output artifact (bootstrap.jar) can be set in this project 
as well (so that it doesn't have a version in the name):
   jar.archiveName = "bootstrap.jar"
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Christian Hafner-Sprengholz (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064719#comment-17064719
 ] 

Christian Hafner-Sprengholz edited comment on SOLR-13183 at 3/23/20, 11:18 AM:
---

The original report of this includes a step by step procedure to reproduce the 
situation.
In our environment we use SolrJ. We do not have exclusive control over all the 
data used in the environment. The %-character is used as part of a synonym.


was (Author: haspre):
The original report of this includes a step by step procedure to reproduce the 
situation.
In our environment, we use SolrJ. We do not have exclusive control over all the 
data used in the environment. The %-character is used as part of a synonym.

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDisplatchFilter.doFilter(), line 403 calls methods forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: 

[jira] [Commented] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Christian Hafner-Sprengholz (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064719#comment-17064719
 ] 

Christian Hafner-Sprengholz commented on SOLR-13183:


The original report of this includes a step by step procedure to reproduce the 
situation.
In our environment, we use SolrJ. We do not have exclusive control over all the 
data used in the environment. The %-character is used as part of a synonym.

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDisplatchFilter.doFilter(), line 403 calls methods forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9286) FST construction explodes memory in BitTable

2020-03-23 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064712#comment-17064712
 ] 

Dawid Weiss commented on LUCENE-9286:
-

Apologies for the delay. I'm still looking into this and trying to figure out 
whether it's the automaton construction or something secondary in my own code. 

We do FST traversals and cache arcs along the way: when doing so we do 
arc.copyOf to freeze the state of an arc we have to return to (to follow the 
target while still enumerating the remaining children). arc.copyOf can be 
heavier now as it clones the underlying bitTable. Perhaps this could be just a 
reference copy for read-only FSTs... this copying seems fairly heavy.

Still not sure whether this is the core of the problem. I'll get back to you.
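
As a toy illustration of the two options (the names below are made up, this is not 
Lucene's Arc API):

{code:java}
/**
 * Sketch of the trade-off: a deep copy clones the per-arc bit table for every
 * cached arc, while read-only traversals could share a single reference to it.
 */
final class CachedArcSketch {
  long[] bitTable; // presence bits of a direct-addressing node
  int target;      // target node of this arc

  CachedArcSketch deepCopyOf(CachedArcSketch other) {
    this.target = other.target;
    // Allocates a new array per cached arc; this is the part that gets heavy.
    this.bitTable = other.bitTable == null ? null : other.bitTable.clone();
    return this;
  }

  CachedArcSketch sharedCopyOf(CachedArcSketch other) {
    this.target = other.target;
    // Safe only while the FST (and therefore the bit table) is never mutated.
    this.bitTable = other.bitTable;
    return this;
  }
}
{code}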

> FST construction explodes memory in BitTable
> 
>
> Key: LUCENE-9286
> URL: https://issues.apache.org/jira/browse/LUCENE-9286
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 8.5
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Attachments: screen-[1].png
>
>
> I see a dramatic increase in the amount of memory required for construction 
> of (arguably large) automata. It currently OOMs with 8GB of memory consumed 
> for bit tables. I am pretty sure this didn't require so much memory before 
> (the automaton is ~50MB after construction).
> Something bad happened in between. Thoughts, [~broustant], [~sokolov]?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data

2020-03-23 Thread GitBox
noblepaul commented on a change in pull request #1327: SOLR-13942: 
/api/cluster/zk/* to fetch raw ZK data
URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r396368718
 
 

 ##
 File path: solr/core/src/java/org/apache/solr/handler/admin/ZkRead.java
 ##
 @@ -0,0 +1,117 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin;
+
+import java.util.Collections;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+import org.apache.solr.api.Command;
+import org.apache.solr.api.EndPoint;
+import org.apache.solr.client.solrj.SolrRequest;
+import org.apache.solr.client.solrj.impl.BinaryResponseParser;
+import org.apache.solr.common.MapWriter;
+import org.apache.solr.common.params.CommonParams;
+import org.apache.solr.common.params.MapSolrParams;
+import org.apache.solr.common.params.SolrParams;
+import org.apache.solr.common.util.ContentStreamBase;
+import org.apache.solr.common.util.Utils;
+import org.apache.solr.core.CoreContainer;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.zookeeper.data.Stat;
+
+import static org.apache.solr.common.params.CommonParams.OMIT_HEADER;
+import static org.apache.solr.common.params.CommonParams.WT;
+import static org.apache.solr.response.RawResponseWriter.CONTENT;
+import static 
org.apache.solr.security.PermissionNameProvider.Name.COLL_READ_PERM;
+
+/**Exposes the content of the Zookeeper
+ * This is an expert feature that exposes the data inside the back end 
zookeeper.This API may change or
+ * be removed in future versions.
+ * This is not a public API. The data that is returned is not guaranteed to 
remain same
+ * across releases, as the data stored in Zookeeper may change from time to 
time.
+ */
+@EndPoint(path = "/cluster/zk/*",
+method = SolrRequest.METHOD.GET,
+permission = COLL_READ_PERM)
+public class ZkRead {
+  private final CoreContainer coreContainer;
+
+  public ZkRead(CoreContainer coreContainer) {
+this.coreContainer = coreContainer;
+  }
+
+  @Command
+  public void get(SolrQueryRequest req, SolrQueryResponse rsp) {
+String path = req.getPathTemplateValues().get("*");
+if (path == null || path.isEmpty()) path = "/";
+byte[] d = null;
+try {
+  List l = 
coreContainer.getZkController().getZkClient().getChildren(path, null, false);
+  if (l != null && !l.isEmpty()) {
+String prefix = path.endsWith("/") ? path : path + "/";
+
+rsp.add(path, (MapWriter) ew -> {
+  for (String s : l) {
+try {
+  Stat stat = 
coreContainer.getZkController().getZkClient().exists(prefix + s, null, false);
+  ew.put(s, (MapWriter) ew1 -> {
+ew1.put("version", stat.getVersion());
+ew1.put("aversion", stat.getAversion());
+ew1.put("children", stat.getNumChildren());
+ew1.put("ctime", stat.getCtime());
+ew1.put("cversion", stat.getCversion());
+ew1.put("czxid", stat.getCzxid());
+ew1.put("ephemeralOwner", stat.getEphemeralOwner());
+ew1.put("mtime", stat.getMtime());
+ew1.put("mzxid", stat.getMzxid());
+ew1.put("pzxid", stat.getPzxid());
+ew1.put("dataLength", stat.getDataLength());
+  });
+} catch (Exception e) {
+  ew.put("s", Collections.singletonMap("error", e.getMessage()));
+}
+  }
+});
+
+  } else {
+d = coreContainer.getZkController().getZkClient().getData(path, null, 
null, false);
+if (d == null || d.length == 0) {
+  rsp.add(path, null);
+  return;
+}
+
+Map map = new HashMap<>(1);
+map.put(WT, "raw");
+map.put(OMIT_HEADER, "true");
+req.setParams(SolrParams.wrapDefaults(new MapSolrParams(map), 
req.getParams()));
+
+
+rsp.add(CONTENT, new ContentStreamBase.ByteArrayStream(d, null,
+d[0] == '{' ? CommonParams.JSON_MIME : 
BinaryResponseParser.BINARY_CONTENT_TYPE));
+
+  }
+
+} catch (Exception e) {
 
 Review comment:
   this is 

[GitHub] [lucene-solr] uschindler commented on a change in pull request #1364: SOLR-14335: Lock Solr's memory to prevent swapping

2020-03-23 Thread GitBox
uschindler commented on a change in pull request #1364: SOLR-14335: Lock Solr's 
memory to prevent swapping
URL: https://github.com/apache/lucene-solr/pull/1364#discussion_r396331814
 
 

 ##
 File path: solr/bootstrap/src/java/org/apache/solr/bootstrap/NativeLibrary.java
 ##
 @@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.bootstrap;
+
+import com.sun.jna.LastErrorException;
+import org.eclipse.jetty.start.StartLog;
+
+import static org.apache.solr.bootstrap.NativeLibrary.OSType.*;
+
+public final class NativeLibrary {
+  public enum OSType {
+LINUX,
+MAC,
+WINDOWS,
+AIX,
+OTHER;
+  }
+
+  public static final OSType osType;
+
+  private static final int MCL_CURRENT;
+
+  private static final int ENOMEM = 12;
+
+  private static final NativeLibraryWrapper wrappedLibrary;
+  private static boolean jnaLockable = false;
+
+  static {
+// detect the OS type the JVM is running on and then set the 
CLibraryWrapper
+// instance to a compatable implementation of CLibraryWrapper for that OS 
type
+osType = getOsType();
+switch (osType) {
+  case MAC:
+wrappedLibrary = new NativeLibraryDarwin();
+break;
+  case WINDOWS:
+wrappedLibrary = new NativeLibraryWindows();
+break;
+  case LINUX:
+  case AIX:
+  case OTHER:
+  default:
+wrappedLibrary = new NativeLibraryLinux();
+}
+
+if (System.getProperty("os.arch").toLowerCase().contains("ppc")) {
+  if (osType == LINUX) {
+MCL_CURRENT = 0x2000;
+  } else if (osType == AIX) {
+MCL_CURRENT = 0x100;
+  } else {
+MCL_CURRENT = 1;
+  }
+} else {
+  MCL_CURRENT = 1;
+}
+  }
+
+  private NativeLibrary() {
+  }
+
+  /**
+   * @return the detected OSType of the Operating System running the JVM using 
crude string matching
+   */
+  private static OSType getOsType() {
 
 Review comment:
   Ah right!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on a change in pull request #1364: SOLR-14335: Lock Solr's memory to prevent swapping

2020-03-23 Thread GitBox
janhoy commented on a change in pull request #1364: SOLR-14335: Lock Solr's 
memory to prevent swapping
URL: https://github.com/apache/lucene-solr/pull/1364#discussion_r396327177
 
 

 ##
 File path: solr/bootstrap/src/java/org/apache/solr/bootstrap/NativeLibrary.java
 ##
 @@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.bootstrap;
+
+import com.sun.jna.LastErrorException;
+import org.eclipse.jetty.start.StartLog;
+
+import static org.apache.solr.bootstrap.NativeLibrary.OSType.*;
+
+public final class NativeLibrary {
+  public enum OSType {
+LINUX,
+MAC,
+WINDOWS,
+AIX,
+OTHER;
+  }
+
+  public static final OSType osType;
+
+  private static final int MCL_CURRENT;
+
+  private static final int ENOMEM = 12;
+
+  private static final NativeLibraryWrapper wrappedLibrary;
+  private static boolean jnaLockable = false;
+
+  static {
+// detect the OS type the JVM is running on and then set the 
CLibraryWrapper
+// instance to a compatable implementation of CLibraryWrapper for that OS 
type
+osType = getOsType();
+switch (osType) {
+  case MAC:
+wrappedLibrary = new NativeLibraryDarwin();
+break;
+  case WINDOWS:
+wrappedLibrary = new NativeLibraryWindows();
+break;
+  case LINUX:
+  case AIX:
+  case OTHER:
+  default:
+wrappedLibrary = new NativeLibraryLinux();
+}
+
+if (System.getProperty("os.arch").toLowerCase().contains("ppc")) {
+  if (osType == LINUX) {
+MCL_CURRENT = 0x2000;
+  } else if (osType == AIX) {
+MCL_CURRENT = 0x100;
+  } else {
+MCL_CURRENT = 1;
+  }
+} else {
+  MCL_CURRENT = 1;
+}
+  }
+
+  private NativeLibrary() {
+  }
+
+  /**
+   * @return the detected OSType of the Operating System running the JVM using 
crude string matching
+   */
+  private static OSType getOsType() {
 
 Review comment:
   This module runs before Lucene's classpath has been established, so does not 
have access to any of that. That's why we cannot even log with slf4j.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] janhoy commented on a change in pull request #1364: SOLR-14335: Lock Solr's memory to prevent swapping

2020-03-23 Thread GitBox
janhoy commented on a change in pull request #1364: SOLR-14335: Lock Solr's 
memory to prevent swapping
URL: https://github.com/apache/lucene-solr/pull/1364#discussion_r396327377
 
 

 ##
 File path: solr/bootstrap/src/java/org/apache/solr/bootstrap/SolrBootstrap.java
 ##
 @@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.bootstrap;
+
+import org.eclipse.jetty.start.StartLog;
+
+/**
+ * Main class that will delegate to Jetty's Main class after doing some 
bootstrap actions.
+ * Everything that needs to be done before the Jetty application starts can go 
here.
+ */
+public class SolrBootstrap {
+  static {
+System.setProperty("jna.tmpdir", System.getProperty("solr.solr.home"));
+  }
+
+  public SolrBootstrap() {
+StartLog.info("Starting Solr...");
+  }
+
+  public static void main(String[] args) {
+SolrBootstrap solrBootstrap = new SolrBootstrap();
+solrBootstrap.memLockMaybe();
+org.eclipse.jetty.start.Main.main(args);
+  }
+
+  private void memLockMaybe() {
+if (Boolean.getBoolean("solr.memory.lock")) {
+  if (NativeLibrary.isAvailable()) {
+StartLog.info("Attempting to lock Solr's memory to prevent 
swapping...");
+NativeLibrary.tryMlockall();
+  } else {
+StartLog.debug("JNA not available, cannot lock memory");
 
 Review comment:
   Agree


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] uschindler commented on a change in pull request #1364: SOLR-14335: Lock Solr's memory to prevent swapping

2020-03-23 Thread GitBox
uschindler commented on a change in pull request #1364: SOLR-14335: Lock Solr's 
memory to prevent swapping
URL: https://github.com/apache/lucene-solr/pull/1364#discussion_r396321964
 
 

 ##
 File path: solr/bootstrap/src/java/org/apache/solr/bootstrap/SolrBootstrap.java
 ##
 @@ -0,0 +1,51 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.bootstrap;
+
+import org.eclipse.jetty.start.StartLog;
+
+/**
+ * Main class that will delegate to Jetty's Main class after doing some 
bootstrap actions.
+ * Everything that needs to be done before the Jetty application starts can go 
here.
+ */
+public class SolrBootstrap {
+  static {
+System.setProperty("jna.tmpdir", System.getProperty("solr.solr.home"));
+  }
+
+  public SolrBootstrap() {
+StartLog.info("Starting Solr...");
+  }
+
+  public static void main(String[] args) {
+SolrBootstrap solrBootstrap = new SolrBootstrap();
+solrBootstrap.memLockMaybe();
+org.eclipse.jetty.start.Main.main(args);
+  }
+
+  private void memLockMaybe() {
+if (Boolean.getBoolean("solr.memory.lock")) {
+  if (NativeLibrary.isAvailable()) {
+StartLog.info("Attempting to lock Solr's memory to prevent 
swapping...");
+NativeLibrary.tryMlockall();
+  } else {
+StartLog.debug("JNA not available, cannot lock memory");
 
 Review comment:
   This should also be "warn" as it is a bad idea not to tell the user that it 
does not work at all. If the user wants to get rid of the warning, they should 
disable the solr.memory.lock sysprop (resp. env var) in the start file.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] uschindler commented on a change in pull request #1364: SOLR-14335: Lock Solr's memory to prevent swapping

2020-03-23 Thread GitBox
uschindler commented on a change in pull request #1364: SOLR-14335: Lock Solr's 
memory to prevent swapping
URL: https://github.com/apache/lucene-solr/pull/1364#discussion_r396320238
 
 

 ##
 File path: solr/bootstrap/src/java/org/apache/solr/bootstrap/NativeLibrary.java
 ##
 @@ -0,0 +1,139 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.solr.bootstrap;
+
+import com.sun.jna.LastErrorException;
+import org.eclipse.jetty.start.StartLog;
+
+import static org.apache.solr.bootstrap.NativeLibrary.OSType.*;
+
+public final class NativeLibrary {
+  public enum OSType {
+LINUX,
+MAC,
+WINDOWS,
+AIX,
+OTHER;
+  }
+
+  public static final OSType osType;
+
+  private static final int MCL_CURRENT;
+
+  private static final int ENOMEM = 12;
+
+  private static final NativeLibraryWrapper wrappedLibrary;
+  private static boolean jnaLockable = false;
+
+  static {
+// detect the OS type the JVM is running on and then set the 
CLibraryWrapper
+// instance to a compatable implementation of CLibraryWrapper for that OS 
type
+osType = getOsType();
+switch (osType) {
+  case MAC:
+wrappedLibrary = new NativeLibraryDarwin();
+break;
+  case WINDOWS:
+wrappedLibrary = new NativeLibraryWindows();
+break;
+  case LINUX:
+  case AIX:
+  case OTHER:
+  default:
+wrappedLibrary = new NativeLibraryLinux();
+}
+
+if (System.getProperty("os.arch").toLowerCase().contains("ppc")) {
+  if (osType == LINUX) {
+MCL_CURRENT = 0x2000;
+  } else if (osType == AIX) {
+MCL_CURRENT = 0x100;
+  } else {
+MCL_CURRENT = 1;
+  }
+} else {
+  MCL_CURRENT = 1;
+}
+  }
+
+  private NativeLibrary() {
+  }
+
+  /**
+   * @return the detected OSType of the Operating System running the JVM using 
crude string matching
+   */
+  private static OSType getOsType() {
 
 Review comment:
   we already detect the OS type using `oal.util.Constants`. We should not 
duplicate code here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064666#comment-17064666
 ] 

Jan Høydahl commented on SOLR-13183:


Can you give a step by step procedure for reproducing this from a clean 
download (preferably of 8.4 version)?
What are you trying to do, and why do you need the %25 in there?
Is this only happening with the schema REST API or other places too?

> NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter
> 
>
> Key: SOLR-13183
> URL: https://issues.apache.org/jira/browse/SOLR-13183
> Project: Solr
>  Issue Type: Bug
>Affects Versions: master (9.0)
> Environment: h1. Steps to reproduce
> * Use a Linux machine.
> *  Build commit {{ea2c8ba}} of Solr as described in the section below.
> * Build the films collection as described below.
> * Start the server using the command {{./bin/solr start -f -p 8983 -s 
> /tmp/home}}
> * Request the URL given in the bug description.
> h1. Compiling the server
> {noformat}
> git clone https://github.com/apache/lucene-solr
> cd lucene-solr
> git checkout ea2c8ba
> ant compile
> cd solr
> ant server
> {noformat}
> h1. Building the collection
> We followed [Exercise 
> 2|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html#exercise-2] from 
> the [Solr 
> Tutorial|http://lucene.apache.org/solr/guide/7_5/solr-tutorial.html]. The 
> attached file ({{home.zip}}) gives the contents of folder {{/tmp/home}} that 
> you will obtain by following the steps below:
> {noformat}
> mkdir -p /tmp/home
> echo '' > 
> /tmp/home/solr.xml
> {noformat}
> In one terminal start a Solr instance in foreground:
> {noformat}
> ./bin/solr start -f -p 8983 -s /tmp/home
> {noformat}
> In another terminal, create a collection of movies, with no shards and no 
> replication, and initialize it:
> {noformat}
> bin/solr create -c films
> curl -X POST -H 'Content-type:application/json' --data-binary '{"add-field": 
> {"name":"name", "type":"text_general", "multiValued":false, "stored":true}}' 
> http://localhost:8983/solr/films/schema
> curl -X POST -H 'Content-type:application/json' --data-binary 
> '{"add-copy-field" : {"source":"*","dest":"_text_"}}' 
> http://localhost:8983/solr/films/schema
> ./bin/post -c films example/films/films.json
> {noformat}
>Reporter: Cesar Rodriguez
>Priority: Minor
>  Labels: diffblue, newdev
> Attachments: home.zip
>
>
> Requesting the following URL causes Solr to return an HTTP 500 error response:
> {noformat}
> http://localhost:8983/solr/films/schema/%25
> {noformat}
> The error response seems to be caused by the following uncaught exception:
> {noformat}
> java.lang.NullPointerException
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:403)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> [...]
> {noformat}
> Function SolrDisplatchFilter.doFilter(), line 403 calls methods forward() on 
> a null pointer. The problem happens because 
> ServletRequestWrapper.getRequestDispatcher(), line 338 returns null. And that 
> happens because 
> org.eclipse.jetty.server.handler.ContextHandler.Context.getRequestDispatcher()
>  returns a null pointer. This happens because 
> org.eclipse.jetty.http.HttpURI.getDecodedPath() tries to decode the string 
> {{/solr/films/schema/%}}, which is an invalid encoding.
> I don’t fully follow the logic of the code but it seems that the 
> percent-encoding of the URL has first been decoded and then it’s being 
> decoded again?
> We found this bug using [Diffblue Microservices 
> Testing|https://www.diffblue.com/labs/]. Find more information on this [fuzz 
> testing 
> campaign|https://www.diffblue.com/blog/2018/12/19/diffblue-microservice-testing-a-sneak-peek-at-our-early-product-and-results].



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13183) NullPointerException at o.a.solr.servlet.SolrDispatchFilter.doFilter

2020-03-23 Thread Christian Hafner-Sprengholz (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064655#comment-17064655
 ] 

Christian Hafner-Sprengholz commented on SOLR-13183:


We are experiencing the same behaviour as described, using 7.7.2 (we are bound 
to this version).

Does anyone have a workaround for this?

{noformat}
schema/analysis/synonyms/cs/100%25+dzus
{noformat}


{code:java}
java.lang.NullPointerException: null
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:404)
 ~[?:?]
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
 ~[?:?]
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
 ~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) 
~[jetty-security-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) 
~[jetty-servlet-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
 ~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 ~[jetty-rewrite-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.server.Server.handle(Server.java:502) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260) 
~[jetty-server-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
 ~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) 
~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118) 
~[jetty-io-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
 ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
 ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
 ~[jetty-util-9.4.14.v20181114.jar:9.4.14.v20181114]
at 

[jira] [Commented] (SOLR-14335) Lock Solr's memory to prevent swapping

2020-03-23 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064647#comment-17064647
 ] 

Jan Høydahl commented on SOLR-14335:


Updated PR:
* Should now work on Windows too.
* Moved start.jar into server/lib/ to make sure that is the only entry point we support.
* Added JNA lib and added code for mlock from Cassandra.
* Enable with env.var `SOLR_MEMORY_LOCK=true` - possible to set in solr.in.sh.

Some issues with locking on macOS. Have not tested on Linux yet, but it should 
work?
More feedback welcome

> Lock Solr's memory to prevent swapping
> --
>
> Key: SOLR-14335
> URL: https://issues.apache.org/jira/browse/SOLR-14335
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Jan Høydahl
>Priority: Major
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Followup from SOLR-10306. While sometimes you are able to disable or reduce 
> swap on the host, other times that is not so easy. Having a native option to 
> lock Solr's memory would be beneficial.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul commented on a change in pull request #1327: SOLR-13942: /api/cluster/zk/* to fetch raw ZK data

2020-03-23 Thread GitBox
noblepaul commented on a change in pull request #1327: SOLR-13942: 
/api/cluster/zk/* to fetch raw ZK data
URL: https://github.com/apache/lucene-solr/pull/1327#discussion_r396288196
 
 

 ##
 File path: 
solr/core/src/test/org/apache/solr/handler/admin/ZookeeperReadTest.java
 ##
 @@ -0,0 +1,100 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.handler.admin;
+
+import java.lang.invoke.MethodHandles;
+import java.net.URL;
+import java.util.Map;
+
+import org.apache.solr.client.solrj.impl.HttpSolrClient;
+import org.apache.solr.cloud.SolrCloudTestCase;
+import org.apache.solr.common.util.Utils;
+import org.apache.zookeeper.CreateMode;
+import org.junit.After;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.solr.common.util.StrUtils.split;
+import static org.apache.solr.common.util.Utils.getObjectByPath;
+
+public class ZookeeperReadTest extends SolrCloudTestCase {
+  private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  @BeforeClass
+  public static void setupCluster() throws Exception {
+configureCluster(1)
+.addConfig("conf", configset("cloud-minimal"))
+.configure();
+  }
+
+  @Before
+  @Override
+  public void setUp() throws Exception {
+super.setUp();
+  }
+
+  @After
+  @Override
+  public void tearDown() throws Exception {
+super.tearDown();
+  }
+
+  @Test
+  public void testZkread() throws Exception {
+URL baseUrl = cluster.getJettySolrRunner(0).getBaseUrl();
+String basezk = baseUrl.toString().replace("/solr", "/api") + 
"/cluster/zk";
+
+try (HttpSolrClient client = new 
HttpSolrClient.Builder(baseUrl.toString()).build()) {
+  Object o = Utils.executeGET(client.getHttpClient(),
+  basezk + "/security.json",
+  Utils.JSONCONSUMER);
+  assertNotNull(o);
+  o = Utils.executeGET(client.getHttpClient(),
+  basezk + "/configs",
+  Utils.JSONCONSUMER);
+  assertEquals("0", String.valueOf(getObjectByPath(o, true, 
split(":/configs:_default:dataLength", ':';
+  assertEquals("0", String.valueOf(getObjectByPath(o, true, 
split(":/configs:conf:dataLength", ':';
+
+  o = Utils.executeGET(client.getHttpClient(),
+  basezk + "/configs?leaf=true",
+  Utils.JSONCONSUMER);
+  assertTrue(((Map)o).containsKey("/configs"));
 
 Review comment:
   what are the warnings?
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-8274) Add per-request MDC logging based on user-provided value.

2020-03-23 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064618#comment-17064618
 ] 

Cao Manh Dat commented on SOLR-8274:


[~dsmiley] I think it can. Right now it encodes basic data like trace-id and time 
into HTTP headers, so we can use that for passing anything else we want. 
But whether we should leverage it for this case or not, I'm not sure.
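
For reference, a minimal servlet-filter sketch of the per-request MDC idea from the 
issue (the filter, header name, and MDC key below are hypothetical, not something 
Solr ships today):

{code:java}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;

public class RequestTagMdcFilter implements Filter {
  private static final String HEADER = "X-Request-Tag"; // hypothetical header name

  @Override
  public void init(FilterConfig config) {}

  @Override
  public void doFilter(ServletRequest req, ServletResponse rsp, FilterChain chain)
      throws IOException, ServletException {
    String tag = req instanceof HttpServletRequest
        ? ((HttpServletRequest) req).getHeader(HEADER) : null;
    if (tag != null) {
      MDC.put("requestTag", tag); // every log line on this thread now carries the tag
    }
    try {
      chain.doFilter(req, rsp);
    } finally {
      MDC.remove("requestTag"); // don't leak the tag to the next request on this thread
    }
  }

  @Override
  public void destroy() {}
}
{code}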

> Add per-request MDC logging based on user-provided value.
> -
>
> Key: SOLR-8274
> URL: https://issues.apache.org/jira/browse/SOLR-8274
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Reporter: Jason Gerlowski
>Priority: Minor
> Attachments: SOLR-8274.patch
>
>
> *Problem 1* Currently, there's no way (AFAIK) to find all log messages 
> associated with a particular request.
> *Problem 2* There's also no easy way for multi-tenant Solr setups to find all 
> log messages associated with a particular customer/tenant.
> Both of these problems would be more manageable if Solr could be configured 
> to record an MDC tag based on a header, or some other user provided value.
> This would allow admins to group together logs about a single request.  If 
> the same header value is repeated multiple times this functionality could 
> also be used to group together arbitrary requests, such as those that come 
> from a particular user, etc.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14356) PeerSync with hanging nodes

2020-03-23 Thread Cao Manh Dat (Jira)
Cao Manh Dat created SOLR-14356:
---

 Summary: PeerSync with hanging nodes
 Key: SOLR-14356
 URL: https://issues.apache.org/jira/browse/SOLR-14356
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat


Right now in {{PeerSync}} (during leader election), in case of an exception when 
requesting versions from a node, we will skip that node if the exception is one of 
the following types:
* ConnectTimeoutException
* NoHttpResponseException
* SocketException

Sometimes the other node basically hangs but still accepts connections. In that 
case a SocketTimeoutException is thrown, we consider the {{PeerSync}} process as 
failed, and the whole shard basically stays leaderless forever (as long as the 
hanging node is still there).

We can't just blindly add {{SocketTimeoutException}} to the above list, since 
[~shalin] mentioned that sometimes a timeout can happen for genuine reasons too, 
e.g. a temporary GC pause.
I think the general idea here is that we obey the {{leaderVoteWait}} restriction 
and retry doing sync with the others.
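
Purely to illustrate that idea (none of the names below exist in Solr, it is just 
the retry-until-leaderVoteWait shape):

{code:java}
import java.util.concurrent.TimeUnit;
import java.util.function.BooleanSupplier;

public class PeerSyncRetrySketch {
  /**
   * Keeps retrying a sync attempt until it succeeds or leaderVoteWait elapses,
   * instead of giving up on the first timeout caused by a hanging node.
   */
  static boolean syncWithRetry(BooleanSupplier syncAttempt, long leaderVoteWaitMs, long pauseMs)
      throws InterruptedException {
    final long deadline = System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(leaderVoteWaitMs);
    while (System.nanoTime() < deadline) {
      try {
        if (syncAttempt.getAsBoolean()) {
          return true; // got versions from enough peers, election can proceed
        }
      } catch (RuntimeException timeoutOrConnectError) {
        // the peer may only be in a temporary GC pause, so treat this as retryable
      }
      Thread.sleep(pauseMs); // back off before asking the peers again
    }
    return false; // leaderVoteWait exhausted, give up as before
  }
}
{code}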




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14356) PeerSync with hanging nodes

2020-03-23 Thread Cao Manh Dat (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14356?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-14356:

Description: 
Right now in {{PeerSync}} (during leader election), in case of an exception when 
requesting versions from a node, we will skip that node if the exception is one of 
the following types:
* ConnectTimeoutException
* NoHttpResponseException
* SocketException

Sometimes the other node basically hangs but still accepts connections. In that 
case a SocketTimeoutException is thrown, we consider the {{PeerSync}} process as 
failed, and the whole shard basically stays leaderless forever (as long as the 
hanging node is still there).

We can't just blindly add {{SocketTimeoutException}} to the above list, since 
[~shalin] mentioned that sometimes a timeout can happen for genuine reasons too, 
e.g. a temporary GC pause.
I think the general idea here is that we obey the {{leaderVoteWait}} restriction 
and retry doing sync with the others in case a connection/timeout exception happens.


  was:
Right now in {{PeerSync}} (during leader election), in case of exception on 
requesting versions to a node, we will skip that node if exception is one the 
following type
* ConnectTimeoutException
* NoHttpResponseException
* SocketException
Sometime the other node basically hang but still accept connection. In that 
case SocketTimeoutException is thrown and we consider the {{PeerSync}} process 
as failed and the whole shard just basically leaderless forever (as long as the 
hang node still there).

We can't just blindly adding {{SocketTimeoutException}} to above list, since 
[~shalin] mentioned that sometimes timeout can happen because of genuine 
reasons too e.g. temporary GC pause.
I think the general idea here is we obey {{leaderVoteWait}} restriction and 
retry doing sync with others.



> PeerSync with hanging nodes
> ---
>
> Key: SOLR-14356
> URL: https://issues.apache.org/jira/browse/SOLR-14356
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Priority: Major
>
> Right now in {{PeerSync}} (during leader election), in case of exception on 
> requesting versions to a node, we will skip that node if exception is one the 
> following type
> * ConnectTimeoutException
> * NoHttpResponseException
> * SocketException
> Sometime the other node basically hang but still accept connection. In that 
> case SocketTimeoutException is thrown and we consider the {{PeerSync}} 
> process as failed and the whole shard just basically leaderless forever (as 
> long as the hang node still there).
> We can't just blindly adding {{SocketTimeoutException}} to above list, since 
> [~shalin] mentioned that sometimes timeout can happen because of genuine 
> reasons too e.g. temporary GC pause.
> I think the general idea here is we obey {{leaderVoteWait}} restriction and 
> retry doing sync with others in case of connection/timeout exception happen.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14355) SolrCore Initialization Failures

2020-03-23 Thread Thiyagarajan Ganesh Rajan (Jira)
Thiyagarajan Ganesh Rajan created SOLR-14355:


 Summary: SolrCore Initialization Failures
 Key: SOLR-14355
 URL: https://issues.apache.org/jira/browse/SOLR-14355
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 5.5.2
Reporter: Thiyagarajan Ganesh Rajan


SolrCore Initialization Failures

opsoffset_shard1_replica1: 
org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
Error opening new searcher






[jira] [Updated] (SOLR-14300) Some conditional clauses on unindexed field will be ignored by query parser in some specific cases

2020-03-23 Thread Hongtai Xue (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongtai Xue updated SOLR-14300:
---
Status: Patch Available  (was: Open)

> Some conditional clauses on unindexed field will be ignored by query parser 
> in some specific cases
> --
>
> Key: SOLR-14300
> URL: https://issues.apache.org/jira/browse/SOLR-14300
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 8.3, 8.4
> Environment: Solr 7.3.1 
> centos7.5
>Reporter: Hongtai Xue
>Priority: Minor
>  Labels: newbie, patch
> Fix For: 7.3, 7.4, 7.5, 7.6, 7.7, 8.0, 8.1, 8.2, 8.3, 8.4
>
> Attachments: SOLR-14300.patch
>
>
> In some specific cases, conditional clauses on an unindexed field will be 
> ignored:
>  * for a query like q=A:1 OR B:1 OR A:2 OR B:2,
>  if field B is not indexed (but docValues="true"), "B:1" will be lost.
>   
>  * but if you write the query as q=A:1 OR A:2 OR B:1 OR B:2,
>  it works perfectly.
> The only difference between the two queries is that they are written in 
>  different orders: one is *ABAB*, the other is *AABB*.
>  
> *Steps to reproduce*
>  you can easily reproduce this problem on a Solr collection with the 
> _default configset and exampledocs/books.csv data.
>  # create a _default collection
> {code:java}
> bin/solr create -c books -s 2 -rf 2{code}
>  # post books.csv.
> {code:java}
> bin/post -c books example/exampledocs/books.csv{code}
>  # run the following queries.
>  ** query1: 
> [http://localhost:8983/solr/books/select?q=+(name_str:Foundation+OR+cat:book+OR+name_str:Jhereg+OR+cat:cd)=query]
>  ** query2: 
> [http://localhost:8983/solr/books/select?q=+(name_str:Foundation+OR+name_str:Jhereg+OR+cat:book+OR+cat:cd)=query]
>  ** then you can see that the parsed queries are different.
>  *** query1.  ("name_str:Foundation" is lost.)
> {code:json}
>  "debug":{
>      "rawquerystring":"+(name_str:Foundation OR cat:book OR name_str:Jhereg 
> OR cat:cd)",
>      "querystring":"+(name_str:Foundation OR cat:book OR name_str:Jhereg OR 
> cat:cd)",
>      "parsedquery":"+(cat:book cat:cd (name_str:[[4a 68 65 72 65 67] TO [4a 
> 68 65 72 65 67]]))",
>      "parsedquery_toString":"+(cat:book cat:cd name_str:[[4a 68 65 72 65 67] 
> TO [4a 68 65 72 65 67]])",
>      "QParser":"LuceneQParser"}}{code}
>  *** query2.  ("name_str:Foundation" isn't lost.)
> {code:json}
>    "debug":{
>      "rawquerystring":"+(name_str:Foundation OR name_str:Jhereg OR cat:book 
> OR cat:cd)",
>      "querystring":"+(name_str:Foundation OR name_str:Jhereg OR cat:book OR 
> cat:cd)",
>      "parsedquery":"+(cat:book cat:cd ((name_str:[[46 6f 75 6e 64 61 74 69 6f 
> 6e] TO [46 6f 75 6e 64 61 74 69 6f 6e]]) (name_str:[[4a 68 65 72 65 67] TO 
> [4a 68 65 72 65 67]])))",
>      "parsedquery_toString":"+(cat:book cat:cd (name_str:[[46 6f 75 6e 64 61 
> 74 69 6f 6e] TO [46 6f 75 6e 64 61 74 69 6f 6e]] name_str:[[4a 68 65 72 65 
> 67] TO [4a 68 65 72 65 67]]))",
>      "QParser":"LuceneQParser"}{code}






[jira] [Resolved] (LUCENE-9275) TestLatLonMultiPolygonShapeQueries failure

2020-03-23 Thread Ignacio Vera (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera resolved LUCENE-9275.
--
Fix Version/s: 8.6
 Assignee: Ignacio Vera
   Resolution: Fixed

> TestLatLonMultiPolygonShapeQueries failure
> --
>
> Key: LUCENE-9275
> URL: https://issues.apache.org/jira/browse/LUCENE-9275
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Ignacio Vera
>Assignee: Ignacio Vera
>Priority: Major
> Fix For: 8.6
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test can fail for big circle queries when the circle goes over the pole.
> {code}
> Error Message:
> wrong hit (first of possibly more):  FAIL: id=128 should match but did not   
> relation=CONTAINS   query=LatLonShapeQuery: 
> field=shape:[CIRCLE([73.45044631686574,-43.522442537891635] radius = 
> 1320857.7583952076 meters),] docID=127   shape=[[-43.60599318072272, 
> -95.89632190395075] [1.401298464324817E-45, -95.89632190395075] 
> [1.401298464324817E-45, 148.0564038690461] [-43.60599318072272, 
> -95.89632190395075] , [-8.713707222781277, -137.43977030462523] 
> [-8.665986874636296, -136.83720024522643] [-8.605159056677273, 
> -135.67900228425023] [-9.022985319342514, -135.7748381870073] 
> [-9.57551836995, -135.03944293912676] [-10.486875163146422, 
> -133.75932451570236] [-12.667313123772418, -133.7153234402556] 
> [-15.400299607273027, -133.5089745815] [-17.28330603483186, 
> -134.4554641982157] [-21.607368456646313, -136.29612908889345] 
> [-20.932241412751615, -139.63293025024942] [-20.650194586536255, 
> -141.13774572688035] [-19.001635084539416, -144.5606838562986] 
> [-15.72417778804206, -146.161554433355] [-15.56323460342411, 
> -147.13460257950626] [-11.61552273270253, -144.82632867223] 
> [-8.302765767406079, -143.5037337366715] [-9.07099844105521, 
> -140.49240322673248] [-7.525403752869964, -140.08470342809397] 
> [-8.713707222781277, -137.43977030462523] , [0.999403953552, 
> -157.66023552014605] [90.0, -157.66023552014605] [90.0, 
> 1.401298464324817E-45] [0.999403953552, 1.401298464324817E-45] 
> [0.999403953552, -157.66023552014605] , [78.40177762548313, 
> 0.999403953552] [90.0, 0.999403953552] [90.0, 107.68304478215401] 
> [78.40177762548313, 0.999403953552] ]   deleted?=false  
> distanceQuery=CIRCLE([73.45044631686574,-43.522442537891635] radius = 
> 1320857.7583952076 meters)
> {code}
> reproduce with: 
> {code}ant test  -Dtestcase=TestLatLonMultiPolygonShapeQueries 
> -Dtests.method=testRandomMedium -Dtests.seed=B76D55AB11A1D02A 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=vi 
> -Dtests.timezone=Etc/GMT-3 -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8{code}






[jira] [Commented] (LUCENE-9275) TestLatLonMultiPolygonShapeQueries failure

2020-03-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064568#comment-17064568
 ] 

ASF subversion and git services commented on LUCENE-9275:
-

Commit c63d2813fdbc9fbb64648edfb41fd1e6a7be0070 in lucene-solr's branch 
refs/heads/branch_8x from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c63d281 ]

LUCENE-9275: make TestLatLonMultiPolygonShapeQueries more resilient for 
CONTAINS queries (#1345)



> TestLatLonMultiPolygonShapeQueries failure
> --
>
> Key: LUCENE-9275
> URL: https://issues.apache.org/jira/browse/LUCENE-9275
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test can fail for big circle queries when the circle goes over the pole.
> {code}
> Error Message:
> wrong hit (first of possibly more):  FAIL: id=128 should match but did not   
> relation=CONTAINS   query=LatLonShapeQuery: 
> field=shape:[CIRCLE([73.45044631686574,-43.522442537891635] radius = 
> 1320857.7583952076 meters),] docID=127   shape=[[-43.60599318072272, 
> -95.89632190395075] [1.401298464324817E-45, -95.89632190395075] 
> [1.401298464324817E-45, 148.0564038690461] [-43.60599318072272, 
> -95.89632190395075] , [-8.713707222781277, -137.43977030462523] 
> [-8.665986874636296, -136.83720024522643] [-8.605159056677273, 
> -135.67900228425023] [-9.022985319342514, -135.7748381870073] 
> [-9.57551836995, -135.03944293912676] [-10.486875163146422, 
> -133.75932451570236] [-12.667313123772418, -133.7153234402556] 
> [-15.400299607273027, -133.5089745815] [-17.28330603483186, 
> -134.4554641982157] [-21.607368456646313, -136.29612908889345] 
> [-20.932241412751615, -139.63293025024942] [-20.650194586536255, 
> -141.13774572688035] [-19.001635084539416, -144.5606838562986] 
> [-15.72417778804206, -146.161554433355] [-15.56323460342411, 
> -147.13460257950626] [-11.61552273270253, -144.82632867223] 
> [-8.302765767406079, -143.5037337366715] [-9.07099844105521, 
> -140.49240322673248] [-7.525403752869964, -140.08470342809397] 
> [-8.713707222781277, -137.43977030462523] , [0.999403953552, 
> -157.66023552014605] [90.0, -157.66023552014605] [90.0, 
> 1.401298464324817E-45] [0.999403953552, 1.401298464324817E-45] 
> [0.999403953552, -157.66023552014605] , [78.40177762548313, 
> 0.999403953552] [90.0, 0.999403953552] [90.0, 107.68304478215401] 
> [78.40177762548313, 0.999403953552] ]   deleted?=false  
> distanceQuery=CIRCLE([73.45044631686574,-43.522442537891635] radius = 
> 1320857.7583952076 meters)
> {code}
> reproduce with: 
> {code}ant test  -Dtestcase=TestLatLonMultiPolygonShapeQueries 
> -Dtests.method=testRandomMedium -Dtests.seed=B76D55AB11A1D02A 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=vi 
> -Dtests.timezone=Etc/GMT-3 -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8{code}






[jira] [Commented] (LUCENE-9275) TestLatLonMultiPolygonShapeQueries failure

2020-03-23 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17064567#comment-17064567
 ] 

ASF subversion and git services commented on LUCENE-9275:
-

Commit aaf08c9c4d9ce58511e9821fdcf574b6e3540d4b in lucene-solr's branch 
refs/heads/master from Ignacio Vera
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=aaf08c9 ]

LUCENE-9275: make TestLatLonMultiPolygonShapeQueries more resilient for 
CONTAINS queries (#1345)



> TestLatLonMultiPolygonShapeQueries failure
> --
>
> Key: LUCENE-9275
> URL: https://issues.apache.org/jira/browse/LUCENE-9275
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Ignacio Vera
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This test can fail for big circle queries when the circle goes over the pole.
> {code}
> Error Message:
> wrong hit (first of possibly more):  FAIL: id=128 should match but did not   
> relation=CONTAINS   query=LatLonShapeQuery: 
> field=shape:[CIRCLE([73.45044631686574,-43.522442537891635] radius = 
> 1320857.7583952076 meters),] docID=127   shape=[[-43.60599318072272, 
> -95.89632190395075] [1.401298464324817E-45, -95.89632190395075] 
> [1.401298464324817E-45, 148.0564038690461] [-43.60599318072272, 
> -95.89632190395075] , [-8.713707222781277, -137.43977030462523] 
> [-8.665986874636296, -136.83720024522643] [-8.605159056677273, 
> -135.67900228425023] [-9.022985319342514, -135.7748381870073] 
> [-9.57551836995, -135.03944293912676] [-10.486875163146422, 
> -133.75932451570236] [-12.667313123772418, -133.7153234402556] 
> [-15.400299607273027, -133.5089745815] [-17.28330603483186, 
> -134.4554641982157] [-21.607368456646313, -136.29612908889345] 
> [-20.932241412751615, -139.63293025024942] [-20.650194586536255, 
> -141.13774572688035] [-19.001635084539416, -144.5606838562986] 
> [-15.72417778804206, -146.161554433355] [-15.56323460342411, 
> -147.13460257950626] [-11.61552273270253, -144.82632867223] 
> [-8.302765767406079, -143.5037337366715] [-9.07099844105521, 
> -140.49240322673248] [-7.525403752869964, -140.08470342809397] 
> [-8.713707222781277, -137.43977030462523] , [0.999403953552, 
> -157.66023552014605] [90.0, -157.66023552014605] [90.0, 
> 1.401298464324817E-45] [0.999403953552, 1.401298464324817E-45] 
> [0.999403953552, -157.66023552014605] , [78.40177762548313, 
> 0.999403953552] [90.0, 0.999403953552] [90.0, 107.68304478215401] 
> [78.40177762548313, 0.999403953552] ]   deleted?=false  
> distanceQuery=CIRCLE([73.45044631686574,-43.522442537891635] radius = 
> 1320857.7583952076 meters)
> {code}
> reproduce with: 
> {code}ant test  -Dtestcase=TestLatLonMultiPolygonShapeQueries 
> -Dtests.method=testRandomMedium -Dtests.seed=B76D55AB11A1D02A 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=vi 
> -Dtests.timezone=Etc/GMT-3 -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8{code}






[GitHub] [lucene-solr] iverase merged pull request #1345: LUCENE-9275: make TestLatLonMultiPolygonShapeQueries more resilient for CONTAINS queries

2020-03-23 Thread GitBox
iverase merged pull request #1345: LUCENE-9275: make 
TestLatLonMultiPolygonShapeQueries more resilient for CONTAINS queries
URL: https://github.com/apache/lucene-solr/pull/1345
 
 
   

