[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r428457444



##
File path: 
solr/solrj/src/test/org/apache/solr/client/solrj/cloud/autoscaling/TestPolicy.java
##
@@ -1709,13 +1708,134 @@ public DistribStateManager getDistribStateManager() {
 assertEquals(2, s1.getRefCount());
 
 s2[0].release();
-assertFalse(sessionRef.getSessionWrapper() == PolicyHelper.SessionWrapper.DEFAULT_INSTANCE);
+assertFalse(sessionRef.isEmpty());
 s1.release();
-assertTrue(sessionRef.getSessionWrapper() == PolicyHelper.SessionWrapper.DEFAULT_INSTANCE);
+assertTrue(sessionRef.isEmpty());
 
 
   }
 
+  @Test

Review comment:
   Yes. Not sure it's worth it. We'll see what others think.
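   For readers following along, the assertions in the quoted test exercise ref-counted session lifetime: the set of cached sessions only becomes empty once the last borrower releases. A minimal toy sketch of that contract (class and method names are invented for illustration, not the actual Solr classes):

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of a ref-counted session cache: a session stays in the set
// until its last borrower releases it. All names here are hypothetical.
class ToySessionRef {
    private final Set<ToySession> sessions =
            Collections.newSetFromMap(new IdentityHashMap<>());

    synchronized ToySession borrow(ToySession s) {
        s.refCount.incrementAndGet();
        sessions.add(s);
        return s;
    }

    synchronized void release(ToySession s) {
        if (s.refCount.decrementAndGet() <= 0) {
            sessions.remove(s); // last borrower gone: drop from the cache
        }
    }

    synchronized boolean isEmpty() {
        return sessions.isEmpty();
    }
}

class ToySession {
    final AtomicInteger refCount = new AtomicInteger();
}
```

   With two borrows outstanding, one release leaves the set non-empty and the second empties it, which mirrors the assertFalse/assertTrue pair in the quoted test.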





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r428457206



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -382,45 +383,78 @@ static MapWriter loggingInfo(Policy policy, 
SolrCloudManager cloudManager, Sugge
   }
 
   public enum Status {
-NULL,
-//it is just created and not yet used or all operations on it has been 
completed fully
-UNUSED,
-COMPUTING, EXECUTING
+/**
+ * A command is actively using and modifying the session to compute 
placements
+ */
+COMPUTING,
+/**
+ * A command is not done yet processing its changes but no longer updates 
or even uses the session
+ */
+EXECUTING
   }
 
   /**
-   * This class stores a session for sharing purpose. If a process creates a 
session to
-   * compute operations,
-   * 1) see if there is a session that is available in the cache,
-   * 2) if yes, check if it is expired
-   * 3) if it is expired, create a new session
-   * 4) if it is not expired, borrow it
-   * 5) after computing operations put it back in the cache
+   * This class stores sessions for sharing purposes. If a process requires a 
session to
+   * compute operations:
+   * 
+   * see if there is an available non expired session in the cache,
+   * if yes, borrow it.
+   * if no, create a new one and borrow it.
+   * after computing (update) operations are done, {@link 
#returnSession(SessionWrapper)} back to the cache so it's
+   * again available for borrowing.
+   * after all borrowers are done computing then executing with the 
session, {@link #release(SessionWrapper)} it,
+   * which removes it from the cache.
+   * 
*/
   static class SessionRef {
+/**
+ * Lock protecting access to {@link #sessionWrapperSet} and to {@link 
#creationsInProgress}
+ */
 private final Object lockObj = new Object();
-private SessionWrapper sessionWrapper = SessionWrapper.DEFAULT_INSTANCE;
 
+/**
+ * Sessions currently in use in {@link Status#COMPUTING} or {@link 
Status#EXECUTING} states. As soon as all
+ * uses of a session are over, that session is removed from this set. 
Sessions not actively in use are NOT kept around.
+ *
+ * Access should only be done under the protection of {@link 
#lockObj}
+ */
+private Set<SessionWrapper> sessionWrapperSet = Collections.newSetFromMap(new IdentityHashMap<>());
+
+
+/**
+ * Number of sessions currently being created but not yeet present in 
{@link #sessionWrapperSet}.

Review comment:
   Think different
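
   As background on the quoted diff: `Collections.newSetFromMap(new IdentityHashMap<>())` gives a set whose membership is decided by reference identity (`==`) rather than `equals()`. A self-contained sketch of that behavior:

```java
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

public class IdentitySetDemo {
    // Same construction as sessionWrapperSet in the diff:
    // membership is decided by ==, not equals().
    static <T> Set<T> newIdentitySet() {
        return Collections.newSetFromMap(new IdentityHashMap<>());
    }

    public static void main(String[] args) {
        Set<String> set = newIdentitySet();
        String a = new String("session");
        String b = new String("session"); // equals(a) but a different object
        set.add(a);
        System.out.println(set.contains(a)); // true
        System.out.println(set.contains(b)); // false: different reference
    }
}
```

   Identity semantics make sense for a cache of borrowed sessions: two wrapper instances are distinct cache entries even if they happen to compare equal.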








[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r428457066



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -382,45 +383,78 @@ static MapWriter loggingInfo(Policy policy, 
SolrCloudManager cloudManager, Sugge
   }
 
   public enum Status {
-NULL,
-//it is just created and not yet used or all operations on it has been 
completed fully
-UNUSED,
-COMPUTING, EXECUTING
+/**
+ * A command is actively using and modifying the session to compute 
placements
+ */
+COMPUTING,
+/**
+ * A command is not done yet processing its changes but no longer updates 
or even uses the session
+ */
+EXECUTING
   }
 
   /**
-   * This class stores a session for sharing purpose. If a process creates a 
session to
-   * compute operations,
-   * 1) see if there is a session that is available in the cache,
-   * 2) if yes, check if it is expired
-   * 3) if it is expired, create a new session
-   * 4) if it is not expired, borrow it
-   * 5) after computing operations put it back in the cache
+   * This class stores sessions for sharing purposes. If a process requires a 
session to
+   * compute operations:
+   * 
+   * see if there is an available non expired session in the cache,
+   * if yes, borrow it.
+   * if no, create a new one and borrow it.
+   * after computing (update) operations are done, {@link 
#returnSession(SessionWrapper)} back to the cache so it's
+   * again available for borrowing.
+   * after all borrowers are done computing then executing with the 
session, {@link #release(SessionWrapper)} it,
+   * which removes it from the cache.
+   * 
*/
   static class SessionRef {
+/**
+ * Lock protecting access to {@link #sessionWrapperSet} and to {@link 
#creationsInProgress}
+ */
 private final Object lockObj = new Object();
-private SessionWrapper sessionWrapper = SessionWrapper.DEFAULT_INSTANCE;
 
+/**
+ * Sessions currently in use in {@link Status#COMPUTING} or {@link 
Status#EXECUTING} states. As soon as all
+ * uses of a session are over, that session is removed from this set. 
Sessions not actively in use are NOT kept around.
+ *
+ * Access should only be done under the protection of {@link 
#lockObj}
+ */
+private Set<SessionWrapper> sessionWrapperSet = Collections.newSetFromMap(new IdentityHashMap<>());
+
+
+/**
+ * Number of sessions currently being created but not yeet present in 
{@link #sessionWrapperSet}.
+ *
+ * Access should only be done under the protection of {@link 
#lockObj}
+ */
+private int creationsInProgress = 0;
 
 public SessionRef() {
 }
 
-
-//only for debugging
-SessionWrapper getSessionWrapper() {
-  return sessionWrapper;
+// used only by tests
+boolean isEmpty() {
+  synchronized (lockObj) {
+return sessionWrapperSet.isEmpty();
+  }
 }
 
 /**
  * All operations suggested by the current session object
  * is complete. Do not even cache anything
  */
 private void release(SessionWrapper sessionWrapper) {
+  boolean present;
   synchronized (lockObj) {
-if (sessionWrapper.createTime == this.sessionWrapper.createTime && 
this.sessionWrapper.refCount.get() <= 0) {
-  log.debug("session set to NULL");
-  this.sessionWrapper = SessionWrapper.DEFAULT_INSTANCE;
-} // else somebody created a new session b/c of expiry . So no need to 
do anything about it
+present = sessionWrapperSet.remove(sessionWrapper);
+  }
+  if (!present) {
+log.warn("released session {} not found in session set", 
sessionWrapper.getCreateTime());
+  } else {
+  TimeSource timeSource = 
sessionWrapper.session.cloudManager.getTimeSource();

Review comment:
   Looks ok to me








[jira] [Commented] (SOLR-13749) Implement support for joining across collections with multiple shards ( XCJF )

2020-05-20 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112814#comment-17112814
 ] 

David Smiley commented on SOLR-13749:
-

Thanks Dan! And also for your patience. I'll do some review here to ensure it 
gets visibility:
 * Instead of {{method=ccjoin}}, let's do {{method=crossCollection}}. Having 
"join" in there is redundant given the context. And no reason to be 
ultra-concise.
 * Let's make this also work for the default. In 
{{org.apache.solr.search.JoinQParserPlugin#parse}}, which detects explicit vs 
default method, you could modify that so that if the default index method fails, 
an attempt is made with crossCollection. Also, maybe tweak the exception of the 
existing fromIndex check failure to mention the new method (or not; your 
preference).
 * The whitelist for solrUrl makes sense, but I think we also need one for 
zkHost since otherwise one cluster could steal data from another cluster; no?
 * Can't routerField be an (optional) query _parameter_ instead of demanding 
pre-configuration?
 * Can you please remove CrossCollectionJoinQParserPlugin or explain why it 
should stay?

BTW, you may want to check out SOLR-14470, which will enable the {{unique}} 
expression to be "pushed down" to Solr's {{/export}} so that you're not sending 
redundant keys.
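
For background on the hash-range push-down discussed in this issue, here is a toy sketch of the idea that a shard owning a given hash range only needs the join keys whose hash falls inside that range. All names are invented, and String.hashCode() stands in for Solr's real CompositeId routing hash:

```java
import java.util.List;
import java.util.stream.Collectors;

public class HashRangeDemo {
    // Stand-in for Solr's routing hash; the real router uses MurmurHash.
    static int hash(String key) {
        return key.hashCode();
    }

    // A shard owning hash range [min, max] (inclusive) only needs to fetch
    // join keys from the remote collection whose hash lands in that range.
    static List<String> keysForShard(List<String> joinKeys, int min, int max) {
        return joinKeys.stream()
                .filter(k -> { int h = hash(k); return h >= min && h <= max; })
                .collect(Collectors.toList());
    }
}
```

When the local collection is routed on the join key field, this filtering is what lets each shard query the remote collection for only its own potential matches instead of the full key set.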

> Implement support for joining across collections with multiple shards ( XCJF )
> --
>
> Key: SOLR-13749
> URL: https://issues.apache.org/jira/browse/SOLR-13749
> Project: Solr
>  Issue Type: New Feature
>Reporter: Kevin Watters
>Assignee: Gus Heck
>Priority: Blocker
> Fix For: 8.6
>
> Attachments: 2020-03 Smiley with ASF hat.jpeg
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> This ticket includes 2 query parsers.
> The first one is the "Cross-collection join filter" (XCJF) query parser. It can 
> call out to a remote collection to get a set of join keys to be used as a 
> filter against the local collection.
> The second one is the Hash Range query parser, which lets you specify a field 
> name and a hash range; only the documents that would have hashed to that 
> range will be returned.
> The XCJF parser does an intersection based on join keys between 2 
> collections.
> The local collection is the collection that you are searching against.
> The remote collection is the collection that contains the join keys that you 
> want to use as a filter.
> Each shard participating in the distributed request will execute a query 
> against the remote collection.  If the local collection is setup with the 
> compositeId router to be routed on the join key field, a hash range query is 
> applied to the remote collection query to only match the documents that 
> contain a potential match for the documents that are in the local shard/core. 
>  
>  
> Here's some vocab to help with the descriptions of the various parameters.
> ||Term||Description||
> |Local Collection|This is the main collection that is being queried.|
> |Remote Collection|This is the collection that the XCJFQuery will query to 
> resolve the join keys.|
> |XCJFQuery|The lucene query that executes a search to get back a set of join 
> keys from a remote collection|
> |HashRangeQuery|The lucene query that matches only the documents whose hash 
> code on a field falls within a specified range.|
>  
>  
> ||Param ||Required ||Description||
> |collection|Required|The name of the external Solr collection to be queried 
> to retrieve the set of join key values|
> |zkHost|Optional|The connection string to be used to connect to Zookeeper.  
> zkHost and solrUrl are both optional parameters, and at most one of them 
> should be specified.  
> If neither zkHost nor solrUrl is specified, the local Zookeeper cluster 
> will be used.|
> |solrUrl|Optional|The URL of the external Solr node to be queried|
> |from|Required|The join key field name in the external collection|
> |to|Required|The join key field name in the local collection|
> |v|See Note|The query to be executed against the external Solr collection to 
> retrieve the set of join key values.  
> Note:  The original query can be passed at the end of the string or as the 
> "v" parameter.  
> It's recommended to use query parameter substitution with the "v" parameter 
> to ensure no issues arise with the default query parsers.|
> |routed| |true / false.  If true, the XCJF query will use each shard's hash 
> range to determine the set of join keys to retrieve for that shard.
> This parameter improves the performance of the cross-collection join, but 
> it depends on the local collection being routed by the toField.  If this 

[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r428455296



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -429,87 +463,149 @@ private void release(SessionWrapper sessionWrapper) {
  * The session can be used by others while the caller is performing 
operations
  */
 private void returnSession(SessionWrapper sessionWrapper) {
-  TimeSource timeSource = sessionWrapper.session != null ? 
sessionWrapper.session.cloudManager.getTimeSource() : TimeSource.NANO_TIME;
+  boolean present;
   synchronized (lockObj) {
 sessionWrapper.status = Status.EXECUTING;
-if (log.isDebugEnabled()) {
-  log.debug("returnSession, curr-time {} sessionWrapper.createTime {}, 
this.sessionWrapper.createTime {} "
-  , time(timeSource, MILLISECONDS),
-  sessionWrapper.createTime,
-  this.sessionWrapper.createTime);
-}
-if (sessionWrapper.createTime == this.sessionWrapper.createTime) {
-  //this session was used for computing new operations and this can 
now be used for other
-  // computing
-  this.sessionWrapper = sessionWrapper;
+present = sessionWrapperSet.contains(sessionWrapper);
 
-  //one thread who is waiting for this need to be notified.
-  lockObj.notify();
-} else {
-  log.debug("create time NOT SAME {} ", 
SessionWrapper.DEFAULT_INSTANCE.createTime);
-  //else just ignore it
-}
+// wake up single thread waiting for a session return (ok if not woken 
up, wait is short)
+// Important to wake up a single one, otherwise of multiple waiting 
threads, all but one will immediately create new sessions
+lockObj.notify();
   }
 
+  // Logging
+  if (present) {
+if (log.isDebugEnabled()) {
+  log.debug("returnSession {}", sessionWrapper.getCreateTime());
+}
+  } else {
+log.warn("returning unknown session {} ", 
sessionWrapper.getCreateTime());
+  }
 }
 
-
-public SessionWrapper get(SolrCloudManager cloudManager) throws 
IOException, InterruptedException {
+/**
+ * Method returning an available session that can be used for {@link 
Status#COMPUTING}, either from the
+ * {@link #sessionWrapperSet} cache or by creating a new one. The status 
of the returned session is set to {@link Status#COMPUTING}.
+ *
+ * Some waiting is done in two cases:
+ * 
+ *   A candidate session is present in {@link #sessionWrapperSet} but 
is still {@link Status#COMPUTING}, a random wait
+ *   is observed to see if the session gets freed to save a session 
creation and allow session reuse,
+ *   It is necessary to create a new session but there are already 
sessions in the process of being created, a
+ *   random wait is observed (if no waiting already occurred waiting for a 
session to become free) before creation
+ *   takes place, just in case one of the created sessions got used then 
{@link #returnSession(SessionWrapper)} in the meantime.
+ * 
+ *
+ * The random wait prevents the "thundering herd" effect when all threads 
needing a session at the same time create a new
+ * one even though some differentiated waits could have led to better 
reuse and less session creations.
+ *
+ * @param allowWait usually true except in tests that know 
there's no point in waiting because nothing
+ *  will happen...
+ */
+public SessionWrapper get(SolrCloudManager cloudManager, boolean 
allowWait) throws IOException, InterruptedException {
   TimeSource timeSource = cloudManager.getTimeSource();
+  long oldestUpdateTimeNs = 
TimeUnit.SECONDS.convert(timeSource.getTimeNs(), TimeUnit.NANOSECONDS) - 
SESSION_EXPIRY;
+  int zkVersion = 
cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion();
+
   synchronized (lockObj) {
-if (sessionWrapper.status == Status.NULL ||
-sessionWrapper.zkVersion != 
cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion() ||
-TimeUnit.SECONDS.convert(timeSource.getTimeNs() - 
sessionWrapper.lastUpdateTime, TimeUnit.NANOSECONDS) > SESSION_EXPIRY) {
-  //no session available or the session is expired
-  return createSession(cloudManager);
-} else {
+SessionWrapper sw = getAvailableSession(zkVersion, oldestUpdateTimeNs);
+
+// Best case scenario: an available session
+if (sw != null) {
+  if (log.isDebugEnabled()) {
+log.debug("reusing session {}", sw.getCreateTime());
+  }
+  return sw;
+}
+
+// Wait for a while before deciding what to do if waiting could help...
+if ((creationsInProgress != 0 || hasCandidateSession(zkVersion, 
oldestUpdateTimeNs)) && 

[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112720#comment-17112720
 ] 

Cao Manh Dat edited comment on SOLR-14419 at 5/21/20, 2:12 AM:
---

{quote}
Query DSL objects need to go into dedicated {{queries}} property see SOLR-12490:
{quote}
If that is the case, won't it be confusing? It would be simpler for users to 
assume that this part (inside the main query or filters)
{code}
{"param": "paramName"} 
{code}
will be translated to
{code}
paramValue // can be a string, a list, a json object picked from params
{code}

{quote}
 I recently get to solving this puzzle it's really tricky. I can share how to 
if you wish to see.
{quote}
Yes, this makes me curious.




was (Author: caomanhdat):
{quote}
Query DSL objects need to go into dedicated {{queries}} property see SOLR-12490:
{quote}
If that is the case, will it confusing? It will be simpler for user to assume 
that, this part
{code}
{"param": "paramName"} 
{code}
will be translated to
{code}
paramValue // can be a string, a list, a json object.
{code}

{quote}
 I recently get to solving this puzzle it's really tricky. I can share how to 
if you wish to see.
{quote}
Yes, this makes me curious.



> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112720#comment-17112720
 ] 

Cao Manh Dat commented on SOLR-14419:
-

{quote}
Query DSL objects need to go into dedicated {{queries}} property see SOLR-12490:
{quote}
If that is the case, won't it be confusing? It would be simpler for users to 
assume that this part
{code}
{"param": "paramName"} 
{code}
will be translated to
{code}
paramValue // can be a string, a list, a json object.
{code}

{quote}
 I recently get to solving this puzzle it's really tricky. I can share how to 
if you wish to see.
{quote}
Yes, this makes me curious.



> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 
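
A sketch of the substitution semantics being discussed, where a {{{"param": "name"}}} node is replaced by the value found under {{params}}. This is purely illustrative of the proposal, not Solr's actual parser, and all names are invented:

```java
import java.util.HashMap;
import java.util.Map;

public class ParamRefDemo {
    // Recursively replace {"param": "<name>"} nodes with params.get(name).
    // Illustrates the proposed semantics only; not Solr code.
    @SuppressWarnings("unchecked")
    static Object resolve(Object node, Map<String, Object> params) {
        if (node instanceof Map) {
            Map<String, Object> m = (Map<String, Object>) node;
            if (m.size() == 1 && m.containsKey("param")) {
                // Substitute the referenced parameter value directly.
                return params.get((String) m.get("param"));
            }
            Map<String, Object> out = new HashMap<>();
            m.forEach((k, v) -> out.put(k, resolve(v, params)));
            return out;
        }
        return node; // strings, numbers, lists left as-is for brevity
    }
}
```

Under this sketch, {{"which": {"param": "prnts"}}} with {{"params": {"prnts": "type:parent"}}} resolves to {{"which": "type:parent"}}, matching the plain-params behavior quoted in the issue.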






[jira] [Commented] (SOLR-14467) inconsistent server errors combining relatedness() with allBuckets:true

2020-05-20 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112695#comment-17112695
 ] 

Chris M. Hostetter commented on SOLR-14467:
---

{quote}... unsurprisingly, most (all?) seem to ignore slotContext ...
{quote}
Correct. slotContext was introduced when the {{relatedness()}} stat was added. 
Prior to that, all "stats" were simple aggregations of document field values – 
they cared only about the individual documents, not the "set" of documents that 
make up the bucket.

{{relatedness()}} introduced the need for knowing "context" about the current 
bucket/slot for the purpose of computing statistical results that couldn't be 
aggregated from the individual document values.

The {{SlotContext}} API was added as a "hook" to capture the information 
currently needed by {{relatedness()}} (the {{Query}}) and to provide an 
extension point for the future if/when additional info about the context of the 
bucket might be needed by other "advanced" stats. The {{collect()}} 
API was extended to take an {{IntFunction}} (as opposed to just 
a {{SlotContext}}) so that there would be no overhead for "simple" aggregations 
that only care about the individual documents and their field values – ex: 
computing "sum(price)" under a "terms" facet on "category" doesn't result 
in building a {{SlotContext}} (object or a {{TermQuery}}) for every facet 
bucket, because the "sum" aggregation never invokes the {{IntFunction}}.
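That lazy-supplier pattern can be sketched in a few lines. This is a simplified illustration, not the actual Solr API: the counter, method names, and the use of String in place of the real SlotContext are all invented here; only the java.util.function.IntFunction type is real:

```java
import java.util.function.IntFunction;

public class LazySlotContextDemo {
    static int built = 0; // counts how many contexts were actually constructed

    // Stands in for building a per-bucket SlotContext (and its Query).
    static String expensiveContext(int slot) {
        built++;
        return "context-for-slot-" + slot;
    }

    // A "sum"-style aggregation ignores the supplier entirely,
    // so no context object is ever built for its buckets...
    static long collectSimple(int slot, long value, IntFunction<String> slotContext) {
        return value; // never calls slotContext.apply(slot)
    }

    // ...while a relatedness()-style aggregation invokes it on demand.
    static String collectWithContext(int slot, IntFunction<String> slotContext) {
        return slotContext.apply(slot);
    }
}
```

The design choice is that the cost of building per-bucket context is paid only by the stats that ask for it, which is exactly why simple aggregations see no overhead.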
{quote}I'm curious to know what you make of the approach in the attached patch.
{quote}
My initial impressions:
 * i don't like the idea of adding yet another collect method signature to 
specify the "accumulation" slot independently of the "collection" slot ... i 
think that if that info really is useful it should just be added as a new 
method callback in the {{SlotContext}}
 * I _really_ don't like adding {{instanceof}} checks to {{RelatednessAgg}} 
that make hardcoded assumptions about what impls will/won't ever exist or 
support allBuckets
 ** reading {{allBucketsSlot}} once and assuming it will never change is also a 
brittle assumption that defies the design of the Resizer API – having just 
fixed SOLR-14492, I'm pretty sure this will break with DVHASH.
 ** Again: I think it would be better if this info was conveyed via the 
SlotContext, using some Enum or flag
 * trackAllBucketsTermOrds looks like it's broken if/when a Resizer is used? 
... i don't see any resize calls for it (again: pretty sure this will break 
with DVHASH)

In general i feel like this patch jumps through a lot of hoops to try and 
"support" {{allBuckets}} on {{relatedness()}} – to the point of (IIUC?) a lot 
of new bookkeeping about when we've already accumulated this slot, but not for 
the same contextSlotOrd, in which case we merge it (in a way that i'm not sure 
at a glance is correct? .. it might be, but i think i saw a place where we 
overwrite a slot value with a new BucketData w/o checking if it's non-null?) 
 but none of this really answers the question of whether this 
accumulated/merged {{relatedness()}} calculation is *meaningful* in the 
allBuckets context?

That's really the question that should be asked before doing any work to make 
the code more complicated: Is there any semantic *meaning* behind returning a 
{{relatedness()}} score for the {{allBuckets:true}} situation?

I would argue the answer is "No" ...

{{relatedness()}} is inherently a computation of how the population of the 
_*SET*_ of documents in a bucket intersects with the population of the 
"foreground" and "background" sets of documents. {{allBuckets}} is fundamentally 
about merging the *accumulated results* of many different buckets _even when 
the sets of documents in those buckets overlap_ due to the buckets 
corresponding with a multivalued field.
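
A toy illustration of why set semantics and multivalued-field merging conflict. This uses simplified document counts, not Solr's actual relatedness() formula, and the class name is invented:

```java
import java.util.HashSet;
import java.util.Set;

public class BucketOverlapDemo {
    // Set-based stats care about distinct documents across buckets,
    // while allBuckets-style accumulation sums per-bucket counts.
    static int unionSize(Set<Integer> a, Set<Integer> b) {
        Set<Integer> union = new HashSet<>(a);
        union.addAll(b);
        return union.size();
    }
}
```

With buckets {1, 2, 3} and {3, 4} (document 3 appears in both because of a multivalued field), summed per-bucket counts give 5 while the union of distinct documents is only 4; it is this double counting that makes a merged set-based statistic ill-defined.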

I said before...
{quote}My gut says we should just using the base DocSet/domain for the entire 
facet as the slotContext in SpecialSlotAcc ...
{quote}
... but the more i think about it the more i feel like that would also be 
misleading. A user that explicitly requested {{allBuckets: true}}, knowing that 
the resulting sub-stat calculations will be different than just computing stats 
on the "base" set, would then be misled by {{relatedness()}} values that 
don't factor in counting the same document more than once because they occur in 
multiple buckets ... but likewise, adding special logic to "double count" 
documents based on how many buckets they are in seems like it would violate the 
intent of {{relatedness()}}, since the calculations are currently focused solely 
on the _set_ of documents, and don't consider things like "these buckets are 
based on term values, how many times does this term appear in each document?"

I think the safe thing to do (for now) is say " {{relatedness()}} is 
meaningless in {{allBuckets}} ." ... in a way that fixes the Server errors, and 
leave open 

[GitHub] [lucene-solr] megancarey commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


megancarey commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r428377565



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -382,45 +383,78 @@ static MapWriter loggingInfo(Policy policy, 
SolrCloudManager cloudManager, Sugge
   }
 
   public enum Status {
-NULL,
-//it is just created and not yet used or all operations on it has been 
completed fully
-UNUSED,
-COMPUTING, EXECUTING
+/**
+ * A command is actively using and modifying the session to compute 
placements
+ */
+COMPUTING,
+/**
+ * A command is not done yet processing its changes but no longer updates 
or even uses the session
+ */
+EXECUTING
   }
 
   /**
-   * This class stores a session for sharing purpose. If a process creates a 
session to
-   * compute operations,
-   * 1) see if there is a session that is available in the cache,
-   * 2) if yes, check if it is expired
-   * 3) if it is expired, create a new session
-   * 4) if it is not expired, borrow it
-   * 5) after computing operations put it back in the cache
+   * This class stores sessions for sharing purposes. If a process requires a 
session to
+   * compute operations:
+   * 
+   * see if there is an available non expired session in the cache,
+   * if yes, borrow it.
+   * if no, create a new one and borrow it.
+   * after computing (update) operations are done, {@link 
#returnSession(SessionWrapper)} back to the cache so it's
+   * again available for borrowing.
+   * after all borrowers are done computing then executing with the 
session, {@link #release(SessionWrapper)} it,
+   * which removes it from the cache.
+   * 
*/
   static class SessionRef {
+/**
+ * Lock protecting access to {@link #sessionWrapperSet} and to {@link 
#creationsInProgress}
+ */
 private final Object lockObj = new Object();
-private SessionWrapper sessionWrapper = SessionWrapper.DEFAULT_INSTANCE;
 
+/**
+ * Sessions currently in use in {@link Status#COMPUTING} or {@link 
Status#EXECUTING} states. As soon as all
+ * uses of a session are over, that session is removed from this set. 
Sessions not actively in use are NOT kept around.
+ *
+ * Access should only be done under the protection of {@link 
#lockObj}
+ */
+private Set<SessionWrapper> sessionWrapperSet = Collections.newSetFromMap(new IdentityHashMap<>());
+
+
+/**
+ * Number of sessions currently being created but not yeet present in 
{@link #sessionWrapperSet}.

Review comment:
   Minor: "yeet"  

##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -429,87 +463,149 @@ private void release(SessionWrapper sessionWrapper) {
  * The session can be used by others while the caller is performing 
operations
  */
 private void returnSession(SessionWrapper sessionWrapper) {
-  TimeSource timeSource = sessionWrapper.session != null ? 
sessionWrapper.session.cloudManager.getTimeSource() : TimeSource.NANO_TIME;
+  boolean present;
   synchronized (lockObj) {
 sessionWrapper.status = Status.EXECUTING;
-if (log.isDebugEnabled()) {
-  log.debug("returnSession, curr-time {} sessionWrapper.createTime {}, 
this.sessionWrapper.createTime {} "
-  , time(timeSource, MILLISECONDS),
-  sessionWrapper.createTime,
-  this.sessionWrapper.createTime);
-}
-if (sessionWrapper.createTime == this.sessionWrapper.createTime) {
-  //this session was used for computing new operations and this can 
now be used for other
-  // computing
-  this.sessionWrapper = sessionWrapper;
+present = sessionWrapperSet.contains(sessionWrapper);
 
-  //one thread who is waiting for this need to be notified.
-  lockObj.notify();
-} else {
-  log.debug("create time NOT SAME {} ", 
SessionWrapper.DEFAULT_INSTANCE.createTime);
-  //else just ignore it
-}
+// wake up single thread waiting for a session return (ok if not woken 
up, wait is short)
+// Important to wake up a single one, otherwise of multiple waiting 
threads, all but one will immediately create new sessions
+lockObj.notify();
   }
 
+  // Logging
+  if (present) {
+if (log.isDebugEnabled()) {
+  log.debug("returnSession {}", sessionWrapper.getCreateTime());
+}
+  } else {
+log.warn("returning unknown session {} ", 
sessionWrapper.getCreateTime());
+  }
 }
 
-
-public SessionWrapper get(SolrCloudManager cloudManager) throws 
IOException, InterruptedException {
+/**
+ * Method returning an available session that can be used for {@link 
Status#COMPUTING}, either from the
+ * {@link #sessionWrapperSet} cache or by creating a new one. The status 
of the returned session is set 

[jira] [Resolved] (SOLR-14477) relatedness() values can be wrong when using 'prefix'

2020-05-20 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter resolved SOLR-14477.
---
Fix Version/s: 8.6
   Resolution: Fixed

> relatedness() values can be wrong when using 'prefix'
> -
>
> Key: SOLR-14477
> URL: https://issues.apache.org/jira/browse/SOLR-14477
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14477.patch, SOLR-14477.patch, SOLR-14477.patch
>
>
> Another {{relatedness()}} bug found in json facets while working on 
> increased test coverage for SOLR-13132.
> If the {{prefix}} option is used when doing a terms facet, then the 
> {{relatedness()}} calculations can be wrong in some situations -- most notably 
> when using {{limit:-1}}, but I'm pretty sure the bug also impacts the code 
> paths where the (first) {{sort}} (or {{prelim_sort}}) is computed against the 
> {{relatedness()}} values.
> Real-world impact of this bug should be relatively low since I can't really 
> think of any practical use cases for using {{relatedness()}} in conjunction 
> with {{prefix}}.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14492) many json.facet aggregations can throw ArrayIndexOutOfBoundsException when using DVHASH due to incorrect resize impl

2020-05-20 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter resolved SOLR-14492.
---
Fix Version/s: 8.6
   Resolution: Fixed

> many json.facet aggregations can throw ArrayIndexOutOfBoundsException when 
> using DVHASH due to incorrect resize impl
> 
>
> Key: SOLR-14492
> URL: https://issues.apache.org/jira/browse/SOLR-14492
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14492.patch, SOLR-14492.patch
>
>
> It appears we have quite a few SlotAcc impls that don't properly implement 
> resize: they ask the {{Resizer}} to resize their arrays, but throw away the 
> result. (Arrays can't be resized in place; the {{Resizer}} is designed to 
> return a new replacement map, initializing empty values and/or mapping old 
> indices to new indices.)
> For many FacetFieldProcessors, this isn't (normally) a problem because they 
> create their Accs using a "max upper bound" on the possible number of slots 
> in advance -- and only use resize later to "shrink" the number of slots.
> But in the case of {{method:dvhash}} / FacetFieldProcessorByHashDV, this 
> processor starts out using a number of slots based on the size of the base 
> DocSet (rounded up to the next power of 2) maxed out at 1024, and then 
> _grows_ the SlotAccs if it encounters more values than that.
> This means that if the "base" context of the term facet is significantly 
> smaller than the number of values in the docValues field being faceted on 
> (i.e. multiValued fields), then these problematic SlotAccs won't grow properly 
> and you'll get ArrayIndexOutOfBoundsException.
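The failure mode described above reduces to a few lines. The `Resizer` here is a hypothetical stand-in for Solr's resizer abstraction, kept just concrete enough to show why dropping the return value leaves the old, too-small array in place.

```java
public class ResizeDemo {
    // Hypothetical resizer: Java arrays cannot grow in place, so resize()
    // must allocate a NEW array and the caller must keep the reference.
    static class Resizer {
        final int newSize;
        Resizer(int newSize) { this.newSize = newSize; }
        int[] resize(int[] old) {
            int[] fresh = new int[newSize];
            System.arraycopy(old, 0, fresh, 0, Math.min(old.length, newSize));
            return fresh;
        }
    }

    int[] slots = new int[4];

    // The bug pattern: the resized array is thrown away, slots stays length 4.
    void resizeBuggy(Resizer r) { r.resize(slots); }

    // The fix: keep the replacement array returned by the resizer.
    void resizeFixed(Resizer r) { slots = r.resize(slots); }

    public static void main(String[] args) {
        ResizeDemo d = new ResizeDemo();
        d.resizeBuggy(new Resizer(8));
        System.out.println(d.slots.length); // still 4: writing slot 5 would throw AIOOBE
        d.resizeFixed(new Resizer(8));
        System.out.println(d.slots.length); // 8: the acc can now grow safely
    }
}
```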






[jira] [Commented] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112618#comment-17112618
 ] 

Lucene/Solr QA commented on SOLR-14503:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  8s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green}  1m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green}  1m  2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green}  1m  2s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 44m 46s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 48m 51s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14503 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13003573/SOLR-14503.patch |
| Optional Tests |  validatesourcepatterns  compile  javac  unit  ratsources  checkforbiddenapis  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 28209cb8b1f |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
|  Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/752/testReport/ |
| modules | C: solr solr/core U: solr |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/752/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 
> 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14503.patch, SOLR-14503.patch
>
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container initialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used even when you specify a different waitForZk value
> # bin/solr contains a script snippet to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.
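A minimal sketch of the constructor-overload problem in point 1: the two `ZkClientDemo` constructors mirror the two `SolrZkClient` signatures quoted above, but the class, field names, and values are invented for illustration only.

```java
public class ZkClientDemo {
    static final int DEFAULT_CLIENT_CONNECT_TIMEOUT = 30_000; // ms, as in the description

    final int zkClientTimeout;
    final int zkClientConnectTimeout;

    // The overload SolrDispatchFilter calls: the connect timeout is silently
    // pinned to the 30s default, so any configured waitForZk never reaches it.
    ZkClientDemo(String zkServerAddress, int zkClientTimeout) {
        this(zkServerAddress, zkClientTimeout, DEFAULT_CLIENT_CONNECT_TIMEOUT);
    }

    // The overload the patch switches to: the connect timeout is configurable.
    ZkClientDemo(String zkServerAddress, int zkClientTimeout, int zkClientConnectTimeout) {
        this.zkClientTimeout = zkClientTimeout;
        this.zkClientConnectTimeout = zkClientConnectTimeout;
    }

    public static void main(String[] args) {
        int waitForZkMs = 60 * 1000; // e.g. from -DwaitForZk=60
        ZkClientDemo twoArg = new ZkClientDemo("zk:2181", waitForZkMs);
        ZkClientDemo threeArg = new ZkClientDemo("zk:2181", waitForZkMs, waitForZkMs);
        System.out.println(twoArg.zkClientConnectTimeout);   // 30000: waitForZk ignored
        System.out.println(threeArg.zkClientConnectTimeout); // 60000: waitForZk honoured
    }
}
```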






[jira] [Commented] (SOLR-14500) currency function doesn't work for asymmetric rates

2020-05-20 Thread Murray Johnston (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112606#comment-17112606
 ] 

Murray Johnston commented on SOLR-14500:


I've also added a potential fix (solr14500.patch). All existing tests, 
including the one added in test.patch, pass.

> currency function doesn't work for asymmetric rates
> ---
>
> Key: SOLR-14500
> URL: https://issues.apache.org/jira/browse/SOLR-14500
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Murray Johnston
>Priority: Minor
> Attachments: solr14500.patch, test.patch
>
>
> Given a currency field of CurrencyFieldType, the following asymmetric rates:
> {code:java}
> 
>  {code}
> and a price field with a value of "24.50,SGD"
> The following usage of the currency function as a pseudo-field in a query 
> returns incorrect values:
> {code:java}
> curl -s 
> 'http://10.43.41.81:32080/solr/product_details/select?fl=price,price_sgd%3Acurrency(price,SGD)=id%3A57373P16=*%3A*=1'
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":0,
> "params":{
>   "q":"*:*",
>   "fl":"price,price_sgd:currency(price,SGD)",
>   "fq":"id:57373P16",
>   "rows":"1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "price":"24.50,SGD",
> "price_sgd":25.74}]
>   }} {code}
> I have traced this to the fact that CurrencyFieldType.getValueSource returns 
> a value that is first converted to the default currency.  When dealing with 
> asymmetric rates this always risks introducing conversion errors.
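The conversion-error risk can be seen with a back-of-the-envelope calculation. The rate configuration was stripped from the message above, so the rates below are made-up asymmetric values; the point is only that converting through the default currency does not round-trip when the two directions are not exact inverses.

```java
public class AsymmetricRatesDemo {
    // Hypothetical asymmetric rates: deliberately NOT exact inverses of each other.
    static final double USD_TO_SGD = 1.40;
    static final double SGD_TO_USD = 0.75; // not 1/1.40, hence "asymmetric"

    // Mimics the behaviour traced in the report: the stored value is first
    // converted to the default currency (USD here), then to the target currency.
    public static double viaDefaultCurrency(double amountSgd) {
        double usd = amountSgd * SGD_TO_USD;
        return usd * USD_TO_SGD;
    }

    public static void main(String[] args) {
        // An SGD->SGD "conversion" should be the identity, but routing it
        // through USD inflates the amount, much like the 24.50 -> 25.74 case above.
        System.out.println(viaDefaultCurrency(24.50)); // 25.725, not 24.50
    }
}
```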






[jira] [Comment Edited] (SOLR-14442) bin/solr to attempt jstack before killing hung Solr instance

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109245#comment-17109245
 ] 

Mikhail Khludnev edited comment on SOLR-14442 at 5/20/20, 8:18 PM:
---

Attached my fix for solr.cmd.
1. It seems that the current code doesn't stop the process forcefully, at least for this
test script, since (as I wrote above) the soft-killed process unbinds from the port but
keeps running.
[~thelabdude], would you comment on this ^ observation?

2. I can't combine IFs in batch properly, so I had to copy-paste. I tried; the
better-looking options don't work.


was (Author: mkhludnev):
Attached my fix for solr.cmd.
1. It seems that the current code doesn't stop the process forcefully, at least for this
test script, since (as I wrote above) the soft-killed process unbinds from the port but
keeps running.
[~timporter], would you comment on this ^ observation?

2. I can't combine IFs in batch properly, so I had to copy-paste. I tried; the
better-looking options don't work.

> bin/solr to attempt jstack before killing hung Solr instance
> 
>
> Key: SOLR-14442
> URL: https://issues.apache.org/jira/browse/SOLR-14442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-14442.patch, SOLR-14442.patch, SOLR-14442.patch, 
> screenshot-1.png
>
>
> If a Solr instance did not respond to the 'stop' command in a timely manner 
> then the {{bin/solr}} script will attempt to forcefully kill it: 
> [https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/solr/bin/solr#L859]
> Gathering of information (e.g. a jstack of the java process) before the kill 
> command may be helpful in determining why the instance did not stop as 
> expected.






[jira] [Commented] (SOLR-14477) relatedness() values can be wrong when using 'prefix'

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112550#comment-17112550
 ] 

ASF subversion and git services commented on SOLR-14477:


Commit 6755796ddf76bc25d61e6bb3988924ac8a0071ec in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6755796 ]

SOLR-14492: Fix ArrayIndexOutOfBoundsException in json.facet 'terms' when 
FacetFieldProcessorByHashDV is used with aggregations over multivalued numeric 
fields

SOLR-14477: Fix incorrect 'relatedness()' calculations in json.facet 'terms' 
when 'prefix' option is used
(cherry picked from commit 28209cb8b1fe2a4d8050e4877c4df2ad5d85509b)


> relatedness() values can be wrong when using 'prefix'
> -
>
> Key: SOLR-14477
> URL: https://issues.apache.org/jira/browse/SOLR-14477
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14477.patch, SOLR-14477.patch, SOLR-14477.patch
>
>
> Another {{relatedness()}} bug found in json facets while working on 
> increased test coverage for SOLR-13132.
> If the {{prefix}} option is used when doing a terms facet, then the 
> {{relatedness()}} calculations can be wrong in some situations -- most notably 
> when using {{limit:-1}}, but I'm pretty sure the bug also impacts the code 
> paths where the (first) {{sort}} (or {{prelim_sort}}) is computed against the 
> {{relatedness()}} values.
> Real-world impact of this bug should be relatively low since I can't really 
> think of any practical use cases for using {{relatedness()}} in conjunction 
> with {{prefix}}.






[jira] [Commented] (SOLR-14492) many json.facet aggregations can throw ArrayIndexOutOfBoundsException when using DVHASH due to incorrect resize impl

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112549#comment-17112549
 ] 

ASF subversion and git services commented on SOLR-14492:


Commit 6755796ddf76bc25d61e6bb3988924ac8a0071ec in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6755796 ]

SOLR-14492: Fix ArrayIndexOutOfBoundsException in json.facet 'terms' when 
FacetFieldProcessorByHashDV is used with aggregations over multivalued numeric 
fields

SOLR-14477: Fix incorrect 'relatedness()' calculations in json.facet 'terms' 
when 'prefix' option is used
(cherry picked from commit 28209cb8b1fe2a4d8050e4877c4df2ad5d85509b)


> many json.facet aggregations can throw ArrayIndexOutOfBoundsException when 
> using DVHASH due to incorrect resize impl
> 
>
> Key: SOLR-14492
> URL: https://issues.apache.org/jira/browse/SOLR-14492
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14492.patch, SOLR-14492.patch
>
>
> It appears we have quite a few SlotAcc impls that don't properly implement 
> resize: they ask the {{Resizer}} to resize their arrays, but throw away the 
> result. (Arrays can't be resized in place; the {{Resizer}} is designed to 
> return a new replacement map, initializing empty values and/or mapping old 
> indices to new indices.)
> For many FacetFieldProcessors, this isn't (normally) a problem because they 
> create their Accs using a "max upper bound" on the possible number of slots 
> in advance -- and only use resize later to "shrink" the number of slots.
> But in the case of {{method:dvhash}} / FacetFieldProcessorByHashDV, this 
> processor starts out using a number of slots based on the size of the base 
> DocSet (rounded up to the next power of 2) maxed out at 1024, and then 
> _grows_ the SlotAccs if it encounters more values than that.
> This means that if the "base" context of the term facet is significantly 
> smaller than the number of values in the docValues field being faceted on 
> (i.e. multiValued fields), then these problematic SlotAccs won't grow properly 
> and you'll get ArrayIndexOutOfBoundsException.






[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112534#comment-17112534
 ] 

Mikhail Khludnev edited comment on SOLR-14419 at 5/20/20, 6:57 PM:
---

Query DSL objects need to go into the dedicated {{queries}} property, see SOLR-12490:
{code}
{ "query": { 
   "bool":{ "must": {"param":"must_clauses"}
  , "must_not":{"param":"must_not_clauses"}
  }},
  "queries": { 
  "must_clauses":["type:parent", "type2:parent"],
  "must_not_clauses" : {"bool": {...}}
}
}
{code}


was (Author: mkhludnev):
Query DSL objects need to go into dedicated {{queries}} property:
{code}
{ "query": { 
   "bool":{ "must": {"param":"must_clauses"}
  , "must_not":{"param":"must_not_clauses"}
  }},
  "queries": { 
  "must_clauses":["type:parent", "type2:parent"],
  "must_not_clauses" : {"bool": {...}}
}
}
{code}

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}},
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 
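What the proposed {{"param":"ref"}} indirection amounts to can be sketched as a substitution pass over the parsed request. `ParamRefDemo` and its plain-`Map` representation of the JSON are illustrative assumptions, not Solr's actual parser.

```java
import java.util.HashMap;
import java.util.Map;

public class ParamRefDemo {
    // Recursively walks a parsed JSON tree; any {"param": name} node is
    // replaced by the value registered under that name in the params map.
    @SuppressWarnings("unchecked")
    public static Object resolve(Object node, Map<String, Object> params) {
        if (node instanceof Map) {
            Map<String, Object> m = (Map<String, Object>) node;
            if (m.size() == 1 && m.containsKey("param")) {
                return params.get((String) m.get("param")); // substitute the reference
            }
            Map<String, Object> out = new HashMap<>();
            m.forEach((k, v) -> out.put(k, resolve(v, params)));
            return out;
        }
        return node; // strings, numbers, etc. pass through unchanged
    }

    public static void main(String[] args) {
        // {"which": {"param": "prnts"}} with params {"prnts": "type:parent"}
        Map<String, Object> which = new HashMap<>();
        which.put("param", "prnts");
        Map<String, Object> query = new HashMap<>();
        query.put("which", which);
        Map<String, Object> params = new HashMap<>();
        params.put("prnts", "type:parent");
        System.out.println(resolve(query, params)); // {which=type:parent}
    }
}
```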






[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112536#comment-17112536
 ] 

Mikhail Khludnev edited comment on SOLR-14419 at 5/20/20, 6:56 PM:
---

bq. I mean I don't see many usecase this feature will be useful?
This is absolutely necessary. Think about that classic killer feature: 
https://lucene.apache.org/solr/guide/6_6/faceting.html#Faceting-TaggingandExcludingFilters
The question is how to achieve this with Query DSL for nested objects in json 
facets and domain switches. I recently got to solving this puzzle; it's really 
tricky. I can share how, if you wish to see. Nevertheless, these refs are 
needed. 


was (Author: mkhludnev):
bq. I mean I don't see many usecase this feature will be useful?
This is absolutely necessary. Think about that classic killer feature: 
https://lucene.apache.org/solr/guide/6_6/faceting.html#Faceting-TaggingandExcludingFilters
The question is how to achieve this with Query DSL for nested objects in json 
facets and domain switches. I recently got to solving this puzzle; it's really 
tricky. I can share how, if you wish to see. Nevertheless, those refs are 
needed. 

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}},
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112536#comment-17112536
 ] 

Mikhail Khludnev commented on SOLR-14419:
-

bq. I mean I don't see many usecase this feature will be useful?
This is absolutely necessary. Think about that classic killer feature: 
https://lucene.apache.org/solr/guide/6_6/faceting.html#Faceting-TaggingandExcludingFilters
The question is how to achieve this with Query DSL for nested objects in json 
facets and domain switches. I recently got to solving this puzzle; it's really 
tricky. I can share how, if you wish to see. Nevertheless, those refs are 
needed. 

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}},
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112534#comment-17112534
 ] 

Mikhail Khludnev commented on SOLR-14419:
-

Query DSL objects need to go into the dedicated {{queries}} property:
{code}
{ "query": { 
   "bool":{ "must": {"param":"must_clauses"}
  , "must_not":{"param":"must_not_clauses"}
  }},
  "queries": { 
  "must_clauses":["type:parent", "type2:parent"],
  "must_not_clauses" : {"bool": {...}}
}
}
{code}

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}},
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Commented] (LUCENE-7788) fail precommit on unparameterised log messages and examine for wasted work/objects

2020-05-20 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-7788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112526#comment-17112526
 ] 

Erick Erickson commented on LUCENE-7788:


Actually I agree, but was proceeding on the theory that I didn't want to have 
to determine whether constructs like that were intentional or not. Had I just 
changed all the exceptions to the bare exception it would have been...fraught. 
I have no skin in the game as far as _leaving_ them that way so please change 
freely as you see fit on a case-by-case basis.

And it gets even worse. There are cases like
{{log.XXX("message {}", e.getCause(), e)}}
which faithfully reproduce what was there, but are certainly ugly.
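The wasted-work angle of this issue can be shown without SLF4J. The two `trace*` methods below are hypothetical stand-ins for a logger with trace disabled; the counter records how often the expensive `toString()` actually runs under each call style.

```java
public class LogParamDemo {
    // Counts how many times the expensive rendering ran.
    static int renders = 0;

    static final Object costly = new Object() {
        @Override public String toString() { renders++; return "costly"; }
    };

    static final boolean traceEnabled = false; // trace is off, as in production

    // Unparameterised style: the caller builds the full String (running
    // toString and concatenation) BEFORE the method is even entered.
    static void traceConcat(String msg) {
        if (traceEnabled) System.out.println(msg);
    }

    // Parameterised style: the argument object is passed as-is, and
    // formatting (including toString) only happens if trace is enabled.
    static void traceParam(String fmt, Object arg) {
        if (traceEnabled) System.out.println(fmt.replace("{}", String.valueOf(arg)));
    }

    public static void main(String[] args) {
        traceConcat("state: " + costly); // renders becomes 1 even though nothing is logged
        traceParam("state: {}", costly); // renders stays 1: no formatting while disabled
        System.out.println("renders=" + renders); // renders=1
    }
}
```

The same reasoning is why precommit can reject unparameterised calls outright: the concatenation cost is paid on every call, not just when the level is enabled.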

> fail precommit on unparameterised log messages and examine for wasted 
> work/objects
> --
>
> Key: LUCENE-7788
> URL: https://issues.apache.org/jira/browse/LUCENE-7788
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 8.6
>
> Attachments: LUCENE-7788.patch, LUCENE-7788.patch, gradle_only.patch, 
> gradle_only.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> SOLR-10415 would be removing existing unparameterised log.trace messages use 
> and once that is in place then this ticket's one-line change would be for 
> 'ant precommit' to reject any future unparameterised log.trace message use.






[jira] [Commented] (SOLR-14477) relatedness() values can be wrong when using 'prefix'

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112510#comment-17112510
 ] 

ASF subversion and git services commented on SOLR-14477:


Commit 28209cb8b1fe2a4d8050e4877c4df2ad5d85509b in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=28209cb ]

SOLR-14492: Fix ArrayIndexOutOfBoundsException in json.facet 'terms' when 
FacetFieldProcessorByHashDV is used with aggregations over multivalued numeric 
fields

SOLR-14477: Fix incorrect 'relatedness()' calculations in json.facet 'terms' 
when 'prefix' option is used


> relatedness() values can be wrong when using 'prefix'
> -
>
> Key: SOLR-14477
> URL: https://issues.apache.org/jira/browse/SOLR-14477
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14477.patch, SOLR-14477.patch, SOLR-14477.patch
>
>
> Another {{relatedness()}} bug found in json facets while working on 
> increased test coverage for SOLR-13132.
> If the {{prefix}} option is used when doing a terms facet, then the 
> {{relatedness()}} calculations can be wrong in some situations -- most notably 
> when using {{limit:-1}}, but I'm pretty sure the bug also impacts the code 
> paths where the (first) {{sort}} (or {{prelim_sort}}) is computed against the 
> {{relatedness()}} values.
> Real-world impact of this bug should be relatively low since I can't really 
> think of any practical use cases for using {{relatedness()}} in conjunction 
> with {{prefix}}.






[jira] [Commented] (SOLR-14492) many json.facet aggregations can throw ArrayIndexOutOfBoundsException when using DVHASH due to incorrect resize impl

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112509#comment-17112509
 ] 

ASF subversion and git services commented on SOLR-14492:


Commit 28209cb8b1fe2a4d8050e4877c4df2ad5d85509b in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=28209cb ]

SOLR-14492: Fix ArrayIndexOutOfBoundsException in json.facet 'terms' when 
FacetFieldProcessorByHashDV is used with aggregations over multivalued numeric 
fields

SOLR-14477: Fix incorrect 'relatedness()' calculations in json.facet 'terms' 
when 'prefix' option is used


> many json.facet aggregations can throw ArrayIndexOutOfBoundsException when 
> using DVHASH due to incorrect resize impl
> 
>
> Key: SOLR-14492
> URL: https://issues.apache.org/jira/browse/SOLR-14492
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Chris M. Hostetter
>Assignee: Chris M. Hostetter
>Priority: Major
> Attachments: SOLR-14492.patch, SOLR-14492.patch
>
>
> It appears we have quite a few SlotAcc impls that don't properly implement 
> resize: they ask the {{Resizer}} to resize their arrays, but throw away the 
> result. (Arrays can't be resized in place; the {{Resizer}} is designed to 
> return a new replacement map, initializing empty values and/or mapping old 
> indices to new indices.)
> For many FacetFieldProcessors, this isn't (normally) a problem because they 
> create their Accs using a "max upper bound" on the possible number of slots 
> in advance -- and only use resize later to "shrink" the number of slots.
> But in the case of {{method:dvhash}} / FacetFieldProcessorByHashDV, this 
> processor starts out using a number of slots based on the size of the base 
> DocSet (rounded up to the next power of 2) maxed out at 1024, and then 
> _grows_ the SlotAccs if it encounters more values than that.
> This means that if the "base" context of the term facet is significantly 
> smaller than the number of values in the docValues field being faceted on 
> (i.e. multiValued fields), then these problematic SlotAccs won't grow properly 
> and you'll get ArrayIndexOutOfBoundsException.






[jira] [Commented] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112489#comment-17112489
 ] 

Colvin Cowie commented on SOLR-14503:
-

Updated the patch with some changes to {{ZkFailoverTest}} which fail without 
the constructor change and pass with it.

> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 
> 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14503.patch, SOLR-14503.patch
>
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container initialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used even when you specify a different waitForZk value
>  # bin/solr contains script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.
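The first point can be illustrated with a simplified sketch. This is not Solr's actual SolrZkClient; the class below is a stand-in that only mirrors the constructor shapes named in the report. The issue is that a two-argument constructor delegating with a hard-coded default swallows any configured waitForZk value:

```java
public class ZkClientSketch {
    // Mirrors the 30-second default described in the report.
    public static final int DEFAULT_CLIENT_CONNECT_TIMEOUT = 30_000;

    public final int zkClientTimeout;
    public final int zkClientConnectTimeout;

    // Two-arg constructor: the connect timeout silently falls back to the
    // default, which is the behavior the report describes.
    public ZkClientSketch(String zkServerAddress, int zkClientTimeout) {
        this(zkServerAddress, zkClientTimeout, DEFAULT_CLIENT_CONNECT_TIMEOUT);
    }

    // Three-arg constructor: honors an explicit connect timeout (waitForZk).
    public ZkClientSketch(String zkServerAddress, int zkClientTimeout,
                          int zkClientConnectTimeout) {
        this.zkClientTimeout = zkClientTimeout;
        this.zkClientConnectTimeout = zkClientConnectTimeout;
    }
}
```

With this shape, a caller that configures waitForZk but invokes the two-arg overload still ends up with a 30-second connect timeout, matching the symptom described.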






[jira] [Comment Edited] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112472#comment-17112472
 ] 

Colvin Cowie edited comment on SOLR-14503 at 5/20/20, 5:57 PM:
---

I see {{ZkFailoverTest}} was added for SOLR-5129, but because it does 
{{Thread.sleep(5000);}} with {{waitForZk}} set to 60 
it doesn't stop the zk server for long enough for it to exceed either the 
configured timeout or the unconfigured DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 
seconds.

-I've tried modifying the test to cover both a successful start and the 
configured timeout being exceeded, but I can't quite get it to work with both 
cases at the same time since I seem to end up with the server dead when the 
second test starts, and I'm not familiar enough with the way these tests are 
written to know what the right way to write these tests is.-

-If I simply duplicate the existing test method so that there's two test cases 
doing the same thing, it also fails. So it's not specific to the case that I'm 
adding.-

 

Edit: I see, it's because {{ZkFailoverTest}} is a SolrCloudTestCase and the 
zookeeper is left shutdown at the end of the test, but no new instance is 
created at the start of the next test


was (Author: cjcowie):
I see {{ZkFailoverTest}} was added for SOLR-5129, but because it does 
{{Thread.sleep(5000);}} with {{waitForZk}} set to 60 
it doesn't stop the zk server for long enough for it to exceed either the 
configured timeout or the unconfigured DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 
seconds.

I've tried modifying the test to cover both a successful start and the 
configured timeout being exceeded, but I can't quite get it to work with both 
cases at the same time since I seem to end up with the server dead when the 
second test starts, and I'm not familiar enough with the way these tests are 
written to know what the right way to write these tests is.

If I simply duplicate the existing test method so that there's two test cases 
doing the same thing, it also fails. So it's not specific to the case that I'm 
adding. [^flawed-test.patch]

 

Edit: I see, it's because {{ZkFailoverTest}} is a SolrCloudTestCase and the 
zookeeper is left shutdown at the end of the test, but no new instance is 
created at the start of the next test







[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Attachment: (was: flawed-test.patch)







[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Attachment: SOLR-14503.patch







[jira] [Comment Edited] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112472#comment-17112472
 ] 

Colvin Cowie edited comment on SOLR-14503 at 5/20/20, 5:42 PM:
---

I see {{ZkFailoverTest}} was added for SOLR-5129, but because it does 
{{Thread.sleep(5000);}} with {{waitForZk}} set to 60 
it doesn't stop the zk server for long enough for it to exceed either the 
configured timeout or the unconfigured DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 
seconds.

I've tried modifying the test to cover both a successful start and the 
configured timeout being exceeded, but I can't quite get it to work with both 
cases at the same time since I seem to end up with the server dead when the 
second test starts, and I'm not familiar enough with way these tests are 
written to know what the right way to write these tests is.

If I simply duplicate the existing test method so that there's two test cases 
doing the same thing, it also fails. So it's not specific to the case that I'm 
adding. [^flawed-test.patch]

 

Edit: I see, it's because {{ZkFailoverTest}} is a SolrCloudTestCase and the 
zookeeper is left shutdown at the end of the test, but no new instance is 
created at the start of the next test


was (Author: cjcowie):
I see {{ZkFailoverTest}} was added for SOLR-5129, but because it does 
{{Thread.sleep(5000);}} with {{waitForZk}} set to 60 
it doesn't stop the zk server for long enough for it to exceed either the 
configured timeout or the unconfigured DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 
seconds.

I've tried modifying the test to cover both a successful start and the 
configured timeout being exceeded, but I can't quite get it to work with both 
cases at the same time since I seem to end up with the server dead when the 
second test starts, and I'm not familiar enough with the way these tests are 
written to know what the right way to write these tests is.

If I simply duplicate the existing test method so that there's two test cases 
doing the same thing, it also fails. So it's not specific to the case that I'm 
adding. [^flawed-test.patch]







[jira] [Commented] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112472#comment-17112472
 ] 

Colvin Cowie commented on SOLR-14503:
-

I see {{ZkFailoverTest}} was added for SOLR-5129, but because it does 
{{Thread.sleep(5000);}} with {{waitForZk}} set to 60 
it doesn't stop the zk server for long enough for it to exceed either the 
configured timeout or the unconfigured DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 
seconds.

I've tried modifying the test to cover both a successful start and the 
configured timeout being exceeded, but I can't quite get it to work with both 
cases at the same time since I seem to end up with the server dead when the 
second test starts, and I'm not familiar enough with the way these tests are 
written to know what the right way to write these tests is.

If I simply duplicate the existing test method so that there's two test cases 
doing the same thing, it also fails. So it's not specific to the case that I'm 
adding. [^flawed-test.patch]







[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Attachment: (was: SOLR-14503-flawed-test.patch)







[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Attachment: flawed-test.patch







[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Attachment: SOLR-14503-flawed-test.patch







[jira] [Commented] (SOLR-14462) Autoscaling placement wrong with concurrent collection creations

2020-05-20 Thread Ilan Ginzburg (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14462?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112435#comment-17112435
 ] 

Ilan Ginzburg commented on SOLR-14462:
--

I took into account the comments and updated the PR (+ rebase). 
[https://github.com/apache/lucene-solr/pull/1504]

This now includes changes to the way new Sessions are created when needed, by 
+taking the actual creation outside of the critical section+. Performance tests 
run elsewhere showed that with a large number of collections in a cluster, 
creating the session could take quite some time, and because creation is 
serialized in the existing implementation, these times add up when multiple 
commands are run (and running 100 commands concurrently is supported in 
Overseer).

With the proposal here, creations can happen concurrently. A random wait delay 
is used to wait for sessions to be returned if cached sessions already exist OR 
if sessions are in the process of being created. This avoids the thundering 
herd effect of all waiting threads making the same decision at the same time 
and ending up creating a large number of sessions.
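The waiting scheme described above can be sketched roughly as follows. This is an illustrative toy, not the actual PolicyHelper code: the class, method names, and session type are invented, and the real implementation is considerably more involved. A thread reuses a cached session if one is available; otherwise, if another creation is in flight, it waits a randomized delay (so all waiters don't give up and create at the same instant), and only then builds its own session outside the lock:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.ThreadLocalRandom;

public class SessionPoolSketch {
    private final Deque<Object> available = new ArrayDeque<>();
    private int creating = 0; // creations currently in flight outside the lock

    // Reuse a cached session if present; otherwise, if another thread is
    // already building one, wait a randomized delay in the hope it gets
    // released to us. The randomness avoids the thundering herd of every
    // waiter timing out and creating a session simultaneously.
    public Object acquire(long maxRandomWaitMs) {
        synchronized (this) {
            if (!available.isEmpty()) return available.pollFirst();
            if (creating > 0 && maxRandomWaitMs > 0) {
                long delay = ThreadLocalRandom.current().nextLong(1, maxRandomWaitMs + 1);
                try {
                    wait(delay); // may be woken early by release()
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                if (!available.isEmpty()) return available.pollFirst();
            }
            creating++;
        }
        try {
            return createSession(); // expensive; done without holding the lock
        } finally {
            synchronized (this) { creating--; }
        }
    }

    // Return a session to the pool and wake any randomized waiters.
    public synchronized void release(Object session) {
        available.addLast(session);
        notifyAll();
    }

    protected Object createSession() {
        return new Object(); // placeholder for the real (expensive) snapshot build
    }
}
```

The key design point is that `createSession()` runs outside the synchronized block, so multiple expensive creations can proceed in parallel instead of queueing behind one lock.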

> Autoscaling placement wrong with concurrent collection creations
> 
>
> Key: SOLR-14462
> URL: https://issues.apache.org/jira/browse/SOLR-14462
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (9.0)
>Reporter: Ilan Ginzburg
>Assignee: Noble Paul
>Priority: Major
> Attachments: PolicyHelperNewLogs.txt, policylogs.txt
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Under concurrent collection creation, wrong Autoscaling placement decisions 
> can lead to severely unbalanced clusters.
>  Sequential creation of the same collections is handled correctly and the 
> cluster is balanced.
> *TL;DR:* under high load, the way sessions that cache future changes to 
> Zookeeper are managed causes placement decisions of multiple concurrent 
> Collection API calls to ignore each other, be based on identical “initial” 
> cluster state, possibly leading to identical placement decisions and as a 
> consequence cluster imbalance.
> *Some context first* for those less familiar with how Autoscaling deals with 
> cluster state change: a PolicyHelper.Session is created with a snapshot of 
> the Zookeeper cluster state and is used to track already decided but not yet 
> persisted to Zookeeper cluster state changes so that Collection API commands 
> can make the right placement decisions.
>  A Collection API command either uses an existing cached Session (that 
> includes changes computed by previous command(s)) or creates a new Session 
> initialized from the Zookeeper cluster state (i.e. with only state changes 
> already persisted).
>  When a Collection API command requires a Session - and one is needed for any 
> cluster state update computation - if one exists but is currently in use, the 
> command can wait up to 10 seconds. If the session becomes available, it is 
> reused. Otherwise, a new one is created.
> The Session lifecycle is as follows: it is created in COMPUTING state by a 
> Collection API command and is initialized with a snapshot of cluster state 
> from Zookeeper (does not require a Zookeeper read, this is running on 
> Overseer that maintains a cache of cluster state). The command has exclusive 
> access to the Session and can change the state of the Session. When the 
> command is done changing the Session, the Session is “returned” and its state 
> changes to EXECUTING while the command continues to run to persist the state 
> to Zookeeper and interact with the nodes, but no longer interacts with the 
> Session. Another command can then grab a Session in EXECUTING state, change 
> its state to COMPUTING to compute new changes taking into account previous 
> changes. When all commands having used the session have completed their work, 
> the session is “released” and destroyed (at this stage, Zookeeper contains 
> all the state changes that were computed using that Session).
> The issue arises when multiple Collection API commands are executed at once. 
> A first Session is created and commands start using it one by one. In a 
> simple 1 shard 1 replica collection creation test run with 100 parallel 
> Collection API requests (see debug logs from PolicyHelper in file 
> policy.logs), this Session update phase (Session in COMPUTING status in 
> SessionWrapper) takes about 250-300ms (MacBook Pro).
> This means that about 40 commands can run by using in turn the same Session 
> (45 in the sample run). The commands that have been waiting for too long time 
> out after 10 seconds, more or less all at the same time (at the rate at which 
> they have 

[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Description: 
When starting Solr in cloud mode, if zookeeper is not available within 30 
seconds, then core container initialization fails and the node will not recover 
when zookeeper is available.

 

I believe SOLR-5129 should have addressed this issue, however it doesn't quite 
do so for two reasons:
 # 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
 it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} rather 
than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
is used even when you specify a different waitForZk value
 # bin/solr contains script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
environment property 
[https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
appears in the solr.in.cmd as an example.

 

I will attach a patch that fixes the above.

  was:
When starting Solr in cloud mode, if zookeeper is not available within 30 
seconds, then core container initialization fails and the node will not recover 
when zookeeper is available.

 

I believe SOLR-5129 should have addressed this issue, however it doesn't quite 
do so for two reasons:
 # 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
 it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} rather 
than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
is used
 # bin/solr contains script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
environment property 
[https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
appears in the solr.in.cmd as an example.

 

I will attach a patch that fixes the above.








[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Status: Patch Available  (was: Open)

> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 
> 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14503.patch
>
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container initialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used
>  # bin/solr contains a script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.






[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Attachment: SOLR-14503.patch

> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 
> 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
> Attachments: SOLR-14503.patch
>
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container initialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used
>  # bin/solr contains a script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.






[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Affects Version/s: 7.0.1
   7.1
   7.2
   7.2.1
   7.3
   7.3.1
   7.4
   7.5
   7.6
   7.7
   7.7.1
   7.7.2
   8.0

> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0.1, 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 
> 7.7.1, 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container initialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used
>  # bin/solr contains a script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.






[jira] [Updated] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colvin Cowie updated SOLR-14503:

Affects Version/s: (was: 7.0.1)

> Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property
> ---
>
> Key: SOLR-14503
> URL: https://issues.apache.org/jira/browse/SOLR-14503
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1, 7.2, 7.2.1, 7.3, 7.3.1, 7.4, 7.5, 7.6, 7.7, 7.7.1, 
> 7.7.2, 8.0, 8.1, 8.2, 7.7.3, 8.1.1, 8.3, 8.4, 8.3.1, 8.5, 8.4.1, 8.5.1
>Reporter: Colvin Cowie
>Priority: Minor
>
> When starting Solr in cloud mode, if zookeeper is not available within 30 
> seconds, then core container initialization fails and the node will not 
> recover when zookeeper is available.
>  
> I believe SOLR-5129 should have addressed this issue, however it doesn't 
> quite do so for two reasons:
>  # 
> [https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
>  it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} 
> rather than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
> zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
> is used
>  # bin/solr contains a script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
> environment property 
> [https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
> there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
> appears in the solr.in.cmd as an example.
>  
> I will attach a patch that fixes the above.






[jira] [Created] (SOLR-14503) Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) property

2020-05-20 Thread Colvin Cowie (Jira)
Colvin Cowie created SOLR-14503:
---

 Summary: Solr does not respect waitForZk (SOLR_WAIT_FOR_ZK) 
property
 Key: SOLR-14503
 URL: https://issues.apache.org/jira/browse/SOLR-14503
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 8.5.1, 8.4.1, 8.5, 8.3.1, 8.4, 8.3, 8.1.1, 7.7.3, 8.2, 8.1
Reporter: Colvin Cowie


When starting Solr in cloud mode, if zookeeper is not available within 30 
seconds, then core container initialization fails and the node will not recover 
when zookeeper is available.

 

I believe SOLR-5129 should have addressed this issue, however it doesn't quite 
do so for two reasons:
 # 
[https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java#L297]
 it calls {{SolrZkClient(String zkServerAddress, int zkClientTimeout)}} rather 
than {{SolrZkClient(String zkServerAddress, int zkClientTimeout, int 
zkClientConnectTimeout)}} so the DEFAULT_CLIENT_CONNECT_TIMEOUT of 30 seconds 
is used
 # bin/solr contains a script to set -DwaitForZk from the SOLR_WAIT_FOR_ZK 
environment property 
[https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L2148] but 
there is no corresponding assignment in bin/solr.cmd, while SOLR_WAIT_FOR_ZK 
appears in the solr.in.cmd as an example.

 

I will attach a patch that fixes the above.






[jira] [Updated] (SOLR-14500) currency function doesn't work for asymmetric rates

2020-05-20 Thread Murray Johnston (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Murray Johnston updated SOLR-14500:
---
Attachment: solr14500.patch

> currency function doesn't work for asymmetric rates
> ---
>
> Key: SOLR-14500
> URL: https://issues.apache.org/jira/browse/SOLR-14500
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Murray Johnston
>Priority: Minor
> Attachments: solr14500.patch, test.patch
>
>
> Given a currency field of CurrencyFieldType, the following asymmetric rates:
> {code:java}
> 
>  {code}
> and a price field with a value of "24.50,SGD"
> The following usage of the currency function as a pseudo-field in a query 
> returns incorrect values:
> {code:java}
> curl -s 
> 'http://10.43.41.81:32080/solr/product_details/select?fl=price,price_sgd%3Acurrency(price,SGD)&fq=id%3A57373P16&q=*%3A*&rows=1'
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":0,
> "params":{
>   "q":"*:*",
>   "fl":"price,price_sgd:currency(price,SGD)",
>   "fq":"id:57373P16",
>   "rows":"1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "price":"24.50,SGD",
> "price_sgd":25.74}]
>   }} {code}
> I have traced this to the fact that CurrencyFieldType.getValueSource returns 
> a value that is first converted to the default currency.  When dealing with 
> asymmetric rates this always risks introducing conversion errors.
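The round-trip error can be shown with a few lines of arithmetic. The rates below are hypothetical (the real rate configuration is elided from the report above), but they are asymmetric in the same way: the SGD→USD rate is deliberately not the reciprocal of USD→SGD, so converting a stored SGD amount to the default currency (USD) and back does not return the original value.

```java
// Hypothetical asymmetric rates (assumption: illustrative only, not the
// reporter's actual configuration) showing why hopping through the default
// currency inflates the value.
class AsymmetricRateSketch {
    static final double USD_TO_SGD = 1.40; // hypothetical rate
    static final double SGD_TO_USD = 0.75; // hypothetical, deliberately != 1/1.40

    // Mimics the reported code path: the value source first converts the stored
    // amount to the default currency, then converts on to the requested currency.
    static double viaDefaultCurrency(double amountSgd) {
        double usd = amountSgd * SGD_TO_USD; // first hop: to the default currency
        return usd * USD_TO_SGD;             // second hop: back to the requested currency
    }
}
```

With these sample rates, `viaDefaultCurrency(24.50)` yields 25.725 rather than 24.50, the same kind of inflation seen in the `price_sgd` pseudo-field above.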






[jira] [Comment Edited] (SOLR-14500) currency function doesn't work for asymmetric rates

2020-05-20 Thread Murray Johnston (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111552#comment-17111552
 ] 

Murray Johnston edited comment on SOLR-14500 at 5/20/20, 4:00 PM:
--

I've added a patch (test.patch) to illustrate the issue


was (Author: mjohnston):
I've added a patch to illustrate the issue

> currency function doesn't work for asymmetric rates
> ---
>
> Key: SOLR-14500
> URL: https://issues.apache.org/jira/browse/SOLR-14500
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Murray Johnston
>Priority: Minor
> Attachments: solr14500.patch, test.patch
>
>
> Given a currency field of CurrencyFieldType, the following asymmetric rates:
> {code:java}
> 
>  {code}
> and a price field with a value of "24.50,SGD"
> The following usage of the currency function as a pseudo-field in a query 
> returns incorrect values:
> {code:java}
> curl -s 
> 'http://10.43.41.81:32080/solr/product_details/select?fl=price,price_sgd%3Acurrency(price,SGD)&fq=id%3A57373P16&q=*%3A*&rows=1'
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":0,
> "params":{
>   "q":"*:*",
>   "fl":"price,price_sgd:currency(price,SGD)",
>   "fq":"id:57373P16",
>   "rows":"1"}},
>   "response":{"numFound":1,"start":0,"docs":[
>   {
> "price":"24.50,SGD",
> "price_sgd":25.74}]
>   }} {code}
> I have traced this to the fact that CurrencyFieldType.getValueSource returns 
> a value that is first converted to the default currency.  When dealing with 
> asymmetric rates this always risks introducing conversion errors.






[jira] [Resolved] (LUCENE-9374) Port check-broken-links to gradle

2020-05-20 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida resolved LUCENE-9374.
---
Fix Version/s: master (9.0)
   Resolution: Fixed

> Port check-broken-links to gradle
> -
>
> Key: LUCENE-9374
> URL: https://issues.apache.org/jira/browse/LUCENE-9374
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: master (9.0)
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> This is a sub-task of LUCENE-9321; adds a gradle task "checkBrokenLinks" that 
> verifies links in the entire documentation.






[jira] [Commented] (LUCENE-9374) Port check-broken-links to gradle

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112266#comment-17112266
 ] 

ASF subversion and git services commented on LUCENE-9374:
-

Commit 84ea0cb87dd7071648bd8efb97644f2af148fa7c in lucene-solr's branch 
refs/heads/master from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=84ea0cb ]

LUCENE-9374: Add checkBrokenLinks gradle task (#1522)



> Port check-broken-links to gradle
> -
>
> Key: LUCENE-9374
> URL: https://issues.apache.org/jira/browse/LUCENE-9374
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: master (9.0)
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Major
>  Time Spent: 6h
>  Remaining Estimate: 0h
>
> This is a sub-task of LUCENE-9321; adds a gradle task "checkBrokenLinks" that 
> verifies links in the entire documentation.






[GitHub] [lucene-solr] mocobeta merged pull request #1522: LUCENE-9374: Add checkBrokenLinks gradle task

2020-05-20 Thread GitBox


mocobeta merged pull request #1522:
URL: https://github.com/apache/lucene-solr/pull/1522


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org






[jira] [Commented] (SOLR-13072) Management of markers for nodeLost / nodeAdded events is broken

2020-05-20 Thread Colvin Cowie (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112254#comment-17112254
 ] 

Colvin Cowie commented on SOLR-13072:
-

Hi [~ab], I've seen intermittent NullPointerExceptions in 
org.apache.solr.cloud.ZkController.registerLiveNodesListener(), which was added 
by this issue. I sent an email to the dev mailing list; could you take a look 
when you get a chance? Thanks in advance.

> Management of markers for nodeLost / nodeAdded events is broken
> ---
>
> Key: SOLR-13072
> URL: https://issues.apache.org/jira/browse/SOLR-13072
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling
>Affects Versions: 7.5, 7.6, 8.0
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 7.7, 8.0, master (9.0)
>
>
> In order to prevent {{nodeLost}} events from being lost when it's the 
> Overseer leader that is the node that was lost a mechanism was added to 
> record markers for these events by any other live node, in 
> {{ZkController.registerLiveNodesListener()}}. As similar mechanism also 
> exists for {{nodeAdded}} events.
> On Overseer leader restart if the autoscaling configuration didn't contain 
> any triggers that consume {{nodeLost}} events then these markers are removed. 
> If there are 1 or more trigger configs that consume {{nodeLost}} events then 
> these triggers would read the markers, remove them and generate appropriate 
> events.
> However, as the {{NodeMarkersRegistrationTest}} shows this mechanism is 
> broken and susceptible to race conditions.
> It's not unusual to have more than 1 {{nodeLost}} trigger because in addition 
> to any user-defined triggers there's always one that is automatically defined 
> if missing: {{.auto_add_replicas}}. However, if there's more than 1 
> {{nodeLost}} trigger then the process of consuming and removing the markers 
> becomes non-deterministic - each trigger may pick up (and delete) all, none, 
> or some of the markers.
> So as it is now this mechanism is broken if more than 1 {{nodeLost}} or more 
> than 1 {{nodeAdded}} trigger is defined.
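The non-determinism described above comes from each trigger reading markers and deleting them as it goes. A simplified model (an assumption for illustration, not the real ZooKeeper-backed marker store) makes the failure mode concrete: whichever trigger runs first drains the store, so every other {{nodeLost}} trigger sees nothing and fires no events.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Simplified model of the consume-and-delete behavior: markers are removed
// from the shared store as each trigger reads them, so a marker reaches at
// most ONE trigger instead of all interested triggers.
class MarkerConsumptionSketch {
    static Set<String> consume(Deque<String> markerStore) {
        Set<String> consumed = new HashSet<>();
        String marker;
        while ((marker = markerStore.poll()) != null) {
            consumed.add(marker); // this marker is now gone for every other trigger
        }
        return consumed;
    }
}
```

If `.auto_add_replicas` runs first and consumes both markers, a user-defined {{nodeLost}} trigger running afterwards consumes an empty set; interleave the two and each may get all, none, or some, exactly as the issue states.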






[jira] [Commented] (SOLR-14142) Enable jetty's request log by default

2020-05-20 Thread Jason Gerlowski (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112240#comment-17112240
 ] 

Jason Gerlowski commented on SOLR-14142:


Conceptually it's a different issue, but in practice tackling them separately 
opens the door to shipping Solr with a default setup that would happily eat 
users' disks on a busy cluster.

Tackling better request-log defaults can be its own jira, no objections there.  
But that ticket should be a pre-req for this one.  We shouldn't turn something 
on by default until it's configured in a way that won't bite users OOTB.

> Enable jetty's request log by default
> -
>
> Key: SOLR-14142
> URL: https://issues.apache.org/jira/browse/SOLR-14142
> Project: Solr
>  Issue Type: Improvement
>Reporter: Robert Muir
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-14142.patch, SOLR-14142.patch
>
>
> I'd like to enable the jetty request log by default.
> This log is now in the correct directory, it no longer uses the deprecated 
> mechanisms (it is asynclogwriter + customformat), etc. See SOLR-14138.
> This log is in a standard format (NCSA) which is supported by tools 
> out-of-box. It does not contain challenges such as java exceptions and is 
> easy to work with. Without it enabled, solr really has insufficient logging 
> (e.g. no IP addresses).
> If someone's solr gets hacked, its only fair they at least get to see who did 
> it.






[jira] [Updated] (SOLR-14484) NPE in ConcurrentUpdateHttp2SolrClient MDC logging

2020-05-20 Thread David Smiley (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-14484:

Fix Version/s: 8.6
 Assignee: David Smiley
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks for contributing Andras!

> NPE in ConcurrentUpdateHttp2SolrClient MDC logging
> --
>
> Key: SOLR-14484
> URL: https://issues.apache.org/jira/browse/SOLR-14484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.4.1
>Reporter: Andras Salamon
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.6
>
> Attachments: SOLR-14484-01.patch, SOLR-14484-02.patch
>
>
> {{client.getBaseURL()}} can be null in {{ConcurrentUpdateHttp2SolrClient}} 
> which can cause problems in MDC logging.
> We had the following error in the stacktrace. We were using Solr 8.4.1 from 
> lily hbase-indexer which still uses log4j 1.2:
> {noformat}
> Error from server at http://127.0.0.1:45895/solr/collection1: 
> java.lang.NullPointerException
>  at java.util.Hashtable.put(Hashtable.java:459)
>  at org.apache.log4j.MDC.put0(MDC.java:150)
>  at org.apache.log4j.MDC.put(MDC.java:85)
>  at org.slf4j.impl.Log4jMDCAdapter.put(Log4jMDCAdapter.java:67)
>  at org.slf4j.MDC.put(MDC.java:147)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.addRunner(ConcurrentUpdateHttp2SolrClient.java:346)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.waitForEmptyQueue(ConcurrentUpdateHttp2SolrClient.java:565)
>  {noformat}
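The root cause is that log4j 1.2's MDC is backed by a `Hashtable`, which rejects null values, so putting a null base URL into MDC throws the NPE in the stack trace above. The guard below is a sketch of the kind of fix the patch describes ("avoid putting null into MDC"); the helper name and key are assumptions, not the actual patched code.

```java
import java.util.Hashtable;

// Sketch of the failure mode and a null guard. Hashtable.put throws
// NullPointerException for null values, which is what blows up inside
// org.apache.log4j.MDC when client.getBaseURL() is null.
class MdcNullGuardSketch {
    static boolean safePut(Hashtable<String, String> mdc, String key, String value) {
        if (value == null) {
            return false; // skip null values instead of letting Hashtable.put throw
        }
        mdc.put(key, value);
        return true;
    }
}
```

With the guard in place, a null `getBaseURL()` simply results in no MDC entry rather than an NPE during `addRunner`.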






[jira] [Commented] (SOLR-14484) NPE in ConcurrentUpdateHttp2SolrClient MDC logging

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112220#comment-17112220
 ] 

ASF subversion and git services commented on SOLR-14484:


Commit ec71a6b4540c0106d4bcb61e0d0d1e20c9f57973 in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ec71a6b ]

SOLR-14484: avoid putting null into MDC
Co-authored-by: Andras Salamon

(cherry picked from commit 2ac640f9d066ebd88f4b5ebd1036792bdbf171bc)


> NPE in ConcurrentUpdateHttp2SolrClient MDC logging
> --
>
> Key: SOLR-14484
> URL: https://issues.apache.org/jira/browse/SOLR-14484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.4.1
>Reporter: Andras Salamon
>Priority: Minor
> Attachments: SOLR-14484-01.patch, SOLR-14484-02.patch
>
>
> {{client.getBaseURL()}} can be null in {{ConcurrentUpdateHttp2SolrClient}} 
> which can cause problems in MDC logging.
> We had the following error in the stacktrace. We were using Solr 8.4.1 from 
> lily hbase-indexer which still uses log4j 1.2:
> {noformat}
> Error from server at http://127.0.0.1:45895/solr/collection1: 
> java.lang.NullPointerException
>  at java.util.Hashtable.put(Hashtable.java:459)
>  at org.apache.log4j.MDC.put0(MDC.java:150)
>  at org.apache.log4j.MDC.put(MDC.java:85)
>  at org.slf4j.impl.Log4jMDCAdapter.put(Log4jMDCAdapter.java:67)
>  at org.slf4j.MDC.put(MDC.java:147)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.addRunner(ConcurrentUpdateHttp2SolrClient.java:346)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.waitForEmptyQueue(ConcurrentUpdateHttp2SolrClient.java:565)
>  {noformat}






[jira] [Commented] (SOLR-14484) NPE in ConcurrentUpdateHttp2SolrClient MDC logging

2020-05-20 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112218#comment-17112218
 ] 

ASF subversion and git services commented on SOLR-14484:


Commit 2ac640f9d066ebd88f4b5ebd1036792bdbf171bc in lucene-solr's branch 
refs/heads/master from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2ac640f ]

SOLR-14484: avoid putting null into MDC
Co-authored-by: Andras Salamon


> NPE in ConcurrentUpdateHttp2SolrClient MDC logging
> --
>
> Key: SOLR-14484
> URL: https://issues.apache.org/jira/browse/SOLR-14484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.4.1
>Reporter: Andras Salamon
>Priority: Minor
> Attachments: SOLR-14484-01.patch, SOLR-14484-02.patch
>
>
> {{client.getBaseURL()}} can be null in {{ConcurrentUpdateHttp2SolrClient}} 
> which can cause problems in MDC logging.
> We had the following error in the stacktrace. We were using Solr 8.4.1 from 
> lily hbase-indexer which still uses log4j 1.2:
> {noformat}
> Error from server at http://127.0.0.1:45895/solr/collection1: 
> java.lang.NullPointerException
>  at java.util.Hashtable.put(Hashtable.java:459)
>  at org.apache.log4j.MDC.put0(MDC.java:150)
>  at org.apache.log4j.MDC.put(MDC.java:85)
>  at org.slf4j.impl.Log4jMDCAdapter.put(Log4jMDCAdapter.java:67)
>  at org.slf4j.MDC.put(MDC.java:147)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.addRunner(ConcurrentUpdateHttp2SolrClient.java:346)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.waitForEmptyQueue(ConcurrentUpdateHttp2SolrClient.java:565)
>  {noformat}






[GitHub] [lucene-solr] sigram commented on a change in pull request #1512: SOLR-13325: Add a collection selector to ComputePlanAction

2020-05-20 Thread GitBox


sigram commented on a change in pull request #1512:
URL: https://github.com/apache/lucene-solr/pull/1512#discussion_r428011261



##
File path: 
solr/core/src/java/org/apache/solr/cloud/autoscaling/ComputePlanAction.java
##
@@ -17,38 +17,28 @@
 
 package org.apache.solr.cloud.autoscaling;
 
-import java.io.IOException;
-import java.lang.invoke.MethodHandles;
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Collections;
-import java.util.HashSet;
-import java.util.List;
-import java.util.Map;
-import java.util.Set;
-import java.util.concurrent.atomic.AtomicInteger;
-
 import org.apache.solr.client.solrj.SolrRequest;
 import org.apache.solr.client.solrj.cloud.SolrCloudManager;
-import org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig;
-import org.apache.solr.client.solrj.cloud.autoscaling.NoneSuggester;
-import org.apache.solr.client.solrj.cloud.autoscaling.Policy;
-import org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper;
-import org.apache.solr.client.solrj.cloud.autoscaling.Suggester;
-import org.apache.solr.client.solrj.cloud.autoscaling.UnsupportedSuggester;
+import org.apache.solr.client.solrj.cloud.autoscaling.*;
 import org.apache.solr.common.SolrException;
 import org.apache.solr.common.cloud.ClusterState;
 import org.apache.solr.common.cloud.DocCollection;
 import org.apache.solr.common.cloud.Replica;
 import org.apache.solr.common.params.AutoScalingParams;
 import org.apache.solr.common.params.CollectionParams;
-import org.apache.solr.common.params.CoreAdminParams;
 import org.apache.solr.common.util.Pair;
 import org.apache.solr.common.util.StrUtils;
 import org.apache.solr.core.SolrResourceLoader;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
+import java.io.IOException;
+import java.lang.invoke.MethodHandles;
+import java.util.*;

Review comment:
   Right, there's no formal rule. I checked the expanded list of imports 
here - indeed it's very long.








[GitHub] [lucene-solr] sigram commented on a change in pull request #1512: SOLR-13325: Add a collection selector to ComputePlanAction

2020-05-20 Thread GitBox


sigram commented on a change in pull request #1512:
URL: https://github.com/apache/lucene-solr/pull/1512#discussion_r428005966



##
File path: solr/solr-ref-guide/src/solrcloud-autoscaling-trigger-actions.adoc
##
@@ -29,12 +29,13 @@ commands which can re-balance the cluster in response to 
trigger events.
 The following parameters are configurable:
 
 `collections`::
-A comma-separated list of collection names. If this list is not empty then
-the computed operations will only calculate collection operations that affect
-listed collections and ignore any other collection operations for collections
+A comma-separated list of collection names. This can also be a selector on the 
collection property e.g. `collections: {'policy': 'my_custom_policy'}` will 
match all collections which use the policy named `my_customer_policy`.

Review comment:
   There's a typo (name mismatch) in the policy name in the doc - 
'my_custom_policy' vs. 'my_customER_policy'.
   
   One other possible property that comes to my mind would be 'router' to 
separate collections with TRA / CRA .. but in this case the collection could 
equally well be set up with its own policy - so for now maybe there's no other 
property that makes sense. :)








[jira] [Comment Edited] (SOLR-14442) bin/solr to attempt jstack before killing hung Solr instance

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17109245#comment-17109245
 ] 

Mikhail Khludnev edited comment on SOLR-14442 at 5/20/20, 1:17 PM:
---

Attached my fix for solr.cmd. 
1. It seems the current code doesn't stop the process forcefully, at least for this 
test script, since (as I wrote above) the soft-killed process unbinds from the port but 
keeps running. 
[~timporter], would you comment on this ^ observation? 

2. I can't combine IFs in batch properly, so I had to copy-paste. I tried; better-looking 
options don't work. 


was (Author: mkhludnev):
Attached my fix for solr.cmd. 
1. it seems that current code doesn't stop process forcefully at least for this 
test script since (as I wrote above) soft killed process unbind from port, but 
keeps running. 
2. I can't combine IFs in batch properly, had to copypaste. I tried, better 
looking options doesn't work. 

> bin/solr to attempt jstack before killing hung Solr instance
> 
>
> Key: SOLR-14442
> URL: https://issues.apache.org/jira/browse/SOLR-14442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-14442.patch, SOLR-14442.patch, SOLR-14442.patch, 
> screenshot-1.png
>
>
> If a Solr instance did not respond to the 'stop' command in a timely manner 
> then the {{bin/solr}} script will attempt to forcefully kill it: 
> [https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/solr/bin/solr#L859]
> Gathering of information (e.g. a jstack of the java process) before the kill 
> command may be helpful in determining why the instance did not stop as 
> expected.
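The flow proposed above (capture a thread dump, then force-kill) can be sketched in miniature. This is only an illustration of the idea, not the actual `bin/solr`/`solr.cmd` logic, which lives in shell/batch scripts; the class name, PID, and output file name here are invented:

```java
import java.io.File;

public class JstackBeforeKill {
    // Illustrative only: build the jstack invocation that would capture a
    // thread dump into a file before the hung process is forcefully killed.
    static ProcessBuilder jstackDump(long pid, File out) {
        return new ProcessBuilder("jstack", Long.toString(pid))
                .redirectErrorStream(true)   // keep jstack errors in the same dump file
                .redirectOutput(out);
    }

    public static void main(String[] args) {
        ProcessBuilder pb = jstackDump(12345L, new File("solr-12345-jstack.txt"));
        System.out.println(pb.command());    // the command that would run: [jstack, 12345]
    }
}
```

Running the dump before the kill means the diagnostic is taken while the process is still alive; if jstack itself hangs or is missing, the kill should still proceed.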



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112182#comment-17112182
 ] 

Cao Manh Dat edited comment on SOLR-14419 at 5/20/20, 1:15 PM:
---

When I say paramValue as a JsonObject, I mean this
{code:json}
{ "query": {
   "bool": { "must": {"param": "must_clauses"},
             "must_not": {"param": "must_not_clauses"}
   }},
  "params": {
   "must_clauses": ["type:parent", "type2:parent"],
   "must_not_clauses": {"bool": {...}}
  }
}
 {code}
 


was (Author: caomanhdat):
When I say paramValue as a JsonObject, I mean this
{ "query": { "bool":{ "must":{"param":"must_clauses"}, 
"must_not":{"param":\{"must_not_clauses"  "params": {  
"must_clauses":["type:parent", "type2:parent"],
  "must_not_clauses" : \{"bool": {...}}
   }
}

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 
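The substitution being requested amounts to walking the parsed JSON tree and splicing in values from the {{params}} map wherever a {{\{"param":"ref"\}}} object appears. A minimal sketch of that idea (not Solr's actual JSON Request API code; the class and method names are invented):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ParamRefSketch {
    // Replace every {"param": "name"} node with params.get("name").
    // A referenced value may itself be a string, a list, or a JSON object -
    // the "paramValue as a JsonObject" case discussed in the comments.
    @SuppressWarnings("unchecked")
    static Object resolve(Object node, Map<String, Object> params) {
        if (node instanceof Map) {
            Map<String, Object> m = (Map<String, Object>) node;
            if (m.size() == 1 && m.containsKey("param")) {
                return params.get(m.get("param"));   // splice in the referenced value
            }
            Map<String, Object> out = new LinkedHashMap<>();
            for (Map.Entry<String, Object> e : m.entrySet()) {
                out.put(e.getKey(), resolve(e.getValue(), params));
            }
            return out;
        }
        if (node instanceof List) {
            List<Object> out = new ArrayList<>();
            for (Object o : (List<Object>) node) {
                out.add(resolve(o, params));
            }
            return out;
        }
        return node;   // plain scalar: returned unchanged
    }
}
```

With {{"prnts":"type:parent"}} in the params map, resolving {{\{"which":\{"param":"prnts"\}\}}} yields {{\{"which":"type:parent"\}}}. If param values could themselves contain {{\{"param":...\}}} references, a cycle guard would be needed, which is the recursion concern raised in the thread.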






[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112182#comment-17112182
 ] 

Cao Manh Dat commented on SOLR-14419:
-

When I say paramValue as a JsonObject, I mean this
{code:json}
{ "query": {
   "bool": { "must": {"param": "must_clauses"},
             "must_not": {"param": "must_not_clauses"}
   }},
  "params": {
   "must_clauses": ["type:parent", "type2:parent"],
   "must_not_clauses": {"bool": {...}}
  }
}
{code}

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Commented] (SOLR-14442) bin/solr to attempt jstack before killing hung Solr instance

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112179#comment-17112179
 ] 

Mikhail Khludnev commented on SOLR-14442:
-

I suppose {{jstack}} is an improvement; using {{qprocess}} is a bugfix. 

> bin/solr to attempt jstack before killing hung Solr instance
> 
>
> Key: SOLR-14442
> URL: https://issues.apache.org/jira/browse/SOLR-14442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-14442.patch, SOLR-14442.patch, SOLR-14442.patch, 
> screenshot-1.png
>
>
> If a Solr instance did not respond to the 'stop' command in a timely manner 
> then the {{bin/solr}} script will attempt to forcefully kill it: 
> [https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/solr/bin/solr#L859]
> Gathering of information (e.g. a jstack of the java process) before the kill 
> command may be helpful in determining why the instance did not stop as 
> expected.






[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112171#comment-17112171
 ] 

Mikhail Khludnev commented on SOLR-14419:
-

bq. It will be nice if paramValue is a JsonObject, then recursive dependency 
will be a problem.
I'm a little bit lost. So, these refs might refer to objects in Query DSL like 
in the patch 
{code}
+);// referencing dsl from filters objs
+client.testJQ(params("json.filter","{param:fq1}","json.filter","{param:fq2}",
+    "json", random().nextBoolean() ?
+        "{queries:{fq1:{lucene:{query:'cat_s:A'}}, fq2:{lucene:{query:'where_s:NY'" : 
{code}
But, as I told there shouldn't be a problem with recursion.

bq. Then how do the traditional local params solve that problem?
Single quotes escape the $.
{code}
"rawquerystring":"{!v='$foo'}",
"querystring":"{!v='$foo'}",
"parsedquery":"+content:foo",
"parsedquery_toString":"+content:foo",
{code}

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...&prnts=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 






[jira] [Commented] (SOLR-14484) NPE in ConcurrentUpdateHttp2SolrClient MDC logging

2020-05-20 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112137#comment-17112137
 ] 

Lucene/Solr QA commented on SOLR-14484:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
43s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}  8m 28s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-14484 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13003470/SOLR-14484-02.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 57b7d8a8dbf |
| ant | version: Apache Ant(TM) version 1.10.5 compiled on March 28 2019 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/751/testReport/ |
| modules | C: solr/solrj U: solr/solrj |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/751/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> NPE in ConcurrentUpdateHttp2SolrClient MDC logging
> --
>
> Key: SOLR-14484
> URL: https://issues.apache.org/jira/browse/SOLR-14484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.4.1
>Reporter: Andras Salamon
>Priority: Minor
> Attachments: SOLR-14484-01.patch, SOLR-14484-02.patch
>
>
> {{client.getBaseURL()}} can be null in {{ConcurrentUpdateHttp2SolrClient}} 
> which can cause problems in MDC logging.
> We had the following error in the stacktrace. We were using Solr 8.4.1 from 
> lily hbase-indexer which still uses log4j 1.2:
> {noformat}
> Error from server at http://127.0.0.1:45895/solr/collection1: 
> java.lang.NullPointerException
>  at java.util.Hashtable.put(Hashtable.java:459)
>  at org.apache.log4j.MDC.put0(MDC.java:150)
>  at org.apache.log4j.MDC.put(MDC.java:85)
>  at org.slf4j.impl.Log4jMDCAdapter.put(Log4jMDCAdapter.java:67)
>  at org.slf4j.MDC.put(MDC.java:147)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.addRunner(ConcurrentUpdateHttp2SolrClient.java:346)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.waitForEmptyQueue(ConcurrentUpdateHttp2SolrClient.java:565)
>  {noformat}
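The root cause is visible in the stack trace above: log4j 1.2 backs its MDC with a {{Hashtable}}, which rejects null values, so {{MDC.put(key, client.getBaseURL())}} throws when the base URL is null. The failure mode reproduces without Solr; the null guard shown is an assumption about the shape of a fix, not the attached patch:

```java
import java.util.Hashtable;

public class MdcNullValueDemo {
    public static void main(String[] args) {
        // log4j 1.2's MDC stores entries in a Hashtable, which (unlike HashMap)
        // throws NullPointerException for null keys or values.
        Hashtable<String, String> mdcBacking = new Hashtable<>();
        String baseUrl = null;   // stands in for client.getBaseURL() returning null
        try {
            mdcBacking.put("ConcurrentUpdateHttp2SolrClient.url", baseUrl);
            System.out.println("put succeeded");
        } catch (NullPointerException e) {
            System.out.println("NPE on null MDC value");   // this branch is taken
        }
        // Hypothetical guard: only record the URL in the MDC when it is known.
        if (baseUrl != null) {
            mdcBacking.put("ConcurrentUpdateHttp2SolrClient.url", baseUrl);
        }
    }
}
```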






[jira] [Commented] (SOLR-14417) Gradle build sometimes fails RE BlockPoolSlice

2020-05-20 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112116#comment-17112116
 ] 

Dawid Weiss commented on SOLR-14417:


Let me know if you can somehow reproduce it from a clean state (git clean -xfd 
.).

> Gradle build sometimes fails RE BlockPoolSlice
> --
>
> Key: SOLR-14417
> URL: https://issues.apache.org/jira/browse/SOLR-14417
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: David Smiley
>Priority: Minor
>
> There seems to be some package visibility hacks around our Hdfs integration:
> {{/Users/dsmiley/SearchDev/lucene-solr/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java:125:
>  error: BlockPoolSlice is not public in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl; cannot be accessed 
> from outside package}}
> {{List<Class<?>> modifiedHadoopClasses = Arrays.asList(BlockPoolSlice.class, 
> DiskChecker.class,}}
> This happens on my Gradle build when running {{gradlew testClasses}} (i.e. to 
> compile tests) but Ant proceeded without issue.  The work-around is to run 
> {{gradlew clean}} first but really I want our build to be smarter here.
> CC [~krisden]






[jira] [Resolved] (SOLR-14417) Gradle build sometimes fails RE BlockPoolSlice

2020-05-20 Thread Dawid Weiss (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss resolved SOLR-14417.

Resolution: Cannot Reproduce

> Gradle build sometimes fails RE BlockPoolSlice
> --
>
> Key: SOLR-14417
> URL: https://issues.apache.org/jira/browse/SOLR-14417
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: David Smiley
>Priority: Minor
>
> There seems to be some package visibility hacks around our Hdfs integration:
> {{/Users/dsmiley/SearchDev/lucene-solr/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java:125:
>  error: BlockPoolSlice is not public in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl; cannot be accessed 
> from outside package}}
> {{List<Class<?>> modifiedHadoopClasses = Arrays.asList(BlockPoolSlice.class, 
> DiskChecker.class,}}
> This happens on my Gradle build when running {{gradlew testClasses}} (i.e. to 
> compile tests) but Ant proceeded without issue.  The work-around is to run 
> {{gradlew clean}} first but really I want our build to be smarter here.
> CC [~krisden]






[jira] [Commented] (SOLR-14417) Gradle build sometimes fails RE BlockPoolSlice

2020-05-20 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112113#comment-17112113
 ] 

Dawid Weiss commented on SOLR-14417:


The class is public, its constructor is package-private. I can't reproduce this 
but it looks like a bug in javac somewhere rather than the build itself.

> Gradle build sometimes fails RE BlockPoolSlice
> --
>
> Key: SOLR-14417
> URL: https://issues.apache.org/jira/browse/SOLR-14417
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: David Smiley
>Priority: Minor
>
> There seems to be some package visibility hacks around our Hdfs integration:
> {{/Users/dsmiley/SearchDev/lucene-solr/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java:125:
>  error: BlockPoolSlice is not public in 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl; cannot be accessed 
> from outside package}}
> {{List<Class<?>> modifiedHadoopClasses = Arrays.asList(BlockPoolSlice.class, 
> DiskChecker.class,}}
> This happens on my Gradle build when running {{gradlew testClasses}} (i.e. to 
> compile tests) but Ant proceeded without issue.  The work-around is to run 
> {{gradlew clean}} first but really I want our build to be smarter here.
> CC [~krisden]
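Dawid's point about access levels can be illustrated in miniature (class names invented, not Hadoop's): a public class with a package-private constructor can be *referenced* from anywhere, which is why the "not public" compile error from a stale build is surprising.

```java
import java.util.Arrays;
import java.util.List;

public class VisibilityDemo {
    // Mirrors the BlockPoolSlice situation: a public class whose constructor
    // is package-private, so only same-package code may instantiate it.
    public static class SliceLike {
        SliceLike() {}   // package-private constructor
    }

    public static void main(String[] args) {
        // Referencing the class literal requires only the TYPE to be visible;
        // it compiles regardless of constructor visibility, like the
        // Arrays.asList(BlockPoolSlice.class, ...) line quoted above.
        List<Class<?>> modified = Arrays.asList(SliceLike.class);
        System.out.println(modified.size());
    }
}
```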






[jira] [Commented] (SOLR-14470) Add streaming expressions to /export handler

2020-05-20 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112099#comment-17112099
 ] 

Andrzej Bialecki commented on SOLR-14470:
-

For some reason Jira didn't add a link to the PR: 
[https://github.com/apache/lucene-solr/pull/1506]

The implementation simply reuses the streaming API to process documents just 
before they are sent out from /export, and it's purely optional - it's used 
only when the {{expr}} parameter is specified.

I had to do some restructuring of {{ExportWriter}} so the diff may seem large, 
but that was also to increase the reuse of already existing methods - the 
actual changes to ExportWriter that matter are just 20-some lines that hook up 
the special streaming shim (ExportWriterStream).

> Add streaming expressions to /export handler
> 
>
> Key: SOLR-14470
> URL: https://issues.apache.org/jira/browse/SOLR-14470
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Export Writer, streaming expressions
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>
> Many streaming scenarios would greatly benefit from the ability to perform 
> partial rollups (or other transformations) as early as possible, in order to 
> minimize the amount of data that has to be sent from shards to the 
> aggregating node.
> This can be implemented as a subset of streaming expressions that process the 
> data directly inside each local {{ExportHandler}} and outputs only the 
> records from the resulting stream. 
> Conceptually it would be similar to the way Hadoop {{Combiner}} works. As is 
> the case with {{Combiner}}, because the input data is processed in batches 
> there would be no guarantee that only 1 record per unique sort values would 
> be emitted - in fact, in most cases multiple partial aggregations would be 
> emitted. Still, in many scenarios this would allow reducing the amount of 
> data to be sent by several orders of magnitude.
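The batch-wise behaviour described above (possibly several partial aggregations per key, as with a Hadoop Combiner) can be sketched like this; it is a conceptual illustration with invented names, not the ExportWriterStream code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartialRollupSketch {
    // Roll up one export batch locally: one (key, count) pair per distinct key
    // in the batch. Because rollup happens per batch, the same key can still be
    // emitted once per batch; the aggregating node finishes the aggregation.
    static List<Map.Entry<String, Integer>> rollupBatch(List<String> keys) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String k : keys) {
            counts.merge(k, 1, Integer::sum);
        }
        return new ArrayList<>(counts.entrySet());
    }

    public static void main(String[] args) {
        // Key "a" appears in both batches, so it is emitted twice - partially
        // aggregated - instead of three raw records: still a net reduction.
        System.out.println(rollupBatch(List.of("a", "a", "b")));
        System.out.println(rollupBatch(List.of("a", "c")));
    }
}
```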






[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r427930548



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -429,87 +440,124 @@ private void release(SessionWrapper sessionWrapper) {
  * The session can be used by others while the caller is performing 
operations
  */
 private void returnSession(SessionWrapper sessionWrapper) {
-  TimeSource timeSource = sessionWrapper.session != null ? 
sessionWrapper.session.cloudManager.getTimeSource() : TimeSource.NANO_TIME;
+  boolean present;
   synchronized (lockObj) {
 sessionWrapper.status = Status.EXECUTING;
-if (log.isDebugEnabled()) {
-  log.debug("returnSession, curr-time {} sessionWrapper.createTime {}, 
this.sessionWrapper.createTime {} "
-  , time(timeSource, MILLISECONDS),
-  sessionWrapper.createTime,
-  this.sessionWrapper.createTime);
-}
-if (sessionWrapper.createTime == this.sessionWrapper.createTime) {
-  //this session was used for computing new operations and this can 
now be used for other
-  // computing
-  this.sessionWrapper = sessionWrapper;
+present = sessionWrapperSet.contains(sessionWrapper);
 
-  //one thread who is waiting for this need to be notified.
-  lockObj.notify();
-} else {
-  log.debug("create time NOT SAME {} ", 
SessionWrapper.DEFAULT_INSTANCE.createTime);
-  //else just ignore it
-}
+// wake up single thread waiting for a session return (ok if not woken 
up, wait is short)
+lockObj.notify();
   }
 
+  // Logging
+  if (present) {
+if (log.isDebugEnabled()) {
+  log.debug("returnSession {}", sessionWrapper.getCreateTime());
+}
+  } else {
+log.warn("returning unknown session {} ", 
sessionWrapper.getCreateTime());
+  }
 }
 
 
-public SessionWrapper get(SolrCloudManager cloudManager) throws 
IOException, InterruptedException {
+public SessionWrapper get(SolrCloudManager cloudManager, boolean 
allowWait) throws IOException, InterruptedException {

Review comment:
   Thanks. I feel it makes the flow a bit harder to read and the savings 
are not huge, so I prefer to stick to the original structure of this method.
   (The memory impact is negligible IMO. There's also an additional call to 
hasNonExpiredSession in the proposal, but again, no big deal.)
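The randomized 1-10 s wait visible in the diff exists to avoid a thundering herd: each waiter times out at a different moment, so returned sessions are more likely to be reused. The pattern in isolation looks like this (names simplified from PolicyHelper; an illustrative sketch, not the PR's code):

```java
import java.util.concurrent.ThreadLocalRandom;

public class SessionWaitSketch {
    private final Object lockObj = new Object();

    // Each waiter picks its own timeout in [1s, 10s), matching the
    // Math.random() * 9 * 1000 + 1000 expression in the diff, so waiters
    // wake up spread out rather than all at once.
    static long pickWaitMs() {
        return ThreadLocalRandom.current().nextLong(1_000, 10_000);
    }

    void awaitSessionReturn() throws InterruptedException {
        synchronized (lockObj) {
            lockObj.wait(pickWaitMs());   // timing out is fine; caller re-checks state
        }
    }

    void onSessionReturned() {
        synchronized (lockObj) {
            lockObj.notify();   // wake one waiter; harmless if nobody is waiting
        }
    }
}
```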








[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r427924889



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -429,87 +440,124 @@ private void release(SessionWrapper sessionWrapper) {
  * The session can be used by others while the caller is performing 
operations
  */
 private void returnSession(SessionWrapper sessionWrapper) {
-  TimeSource timeSource = sessionWrapper.session != null ? 
sessionWrapper.session.cloudManager.getTimeSource() : TimeSource.NANO_TIME;
+  boolean present;
   synchronized (lockObj) {
 sessionWrapper.status = Status.EXECUTING;
-if (log.isDebugEnabled()) {
-  log.debug("returnSession, curr-time {} sessionWrapper.createTime {}, 
this.sessionWrapper.createTime {} "
-  , time(timeSource, MILLISECONDS),
-  sessionWrapper.createTime,
-  this.sessionWrapper.createTime);
-}
-if (sessionWrapper.createTime == this.sessionWrapper.createTime) {
-  //this session was used for computing new operations and this can 
now be used for other
-  // computing
-  this.sessionWrapper = sessionWrapper;
+present = sessionWrapperSet.contains(sessionWrapper);
 
-  //one thread who is waiting for this need to be notified.
-  lockObj.notify();
-} else {
-  log.debug("create time NOT SAME {} ", 
SessionWrapper.DEFAULT_INSTANCE.createTime);
-  //else just ignore it
-}
+// wake up single thread waiting for a session return (ok if not woken 
up, wait is short)
+lockObj.notify();
   }
 
+  // Logging
+  if (present) {
+if (log.isDebugEnabled()) {
+  log.debug("returnSession {}", sessionWrapper.getCreateTime());
+}
+  } else {
+log.warn("returning unknown session {} ", 
sessionWrapper.getCreateTime());
+  }
 }
 
 
-public SessionWrapper get(SolrCloudManager cloudManager) throws 
IOException, InterruptedException {
+public SessionWrapper get(SolrCloudManager cloudManager, boolean 
allowWait) throws IOException, InterruptedException {
   TimeSource timeSource = cloudManager.getTimeSource();
+  long oldestUpdateTimeNs = 
TimeUnit.SECONDS.convert(timeSource.getTimeNs(), TimeUnit.NANOSECONDS) - 
SESSION_EXPIRY;
+  int zkVersion = 
cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion();
+
   synchronized (lockObj) {
-if (sessionWrapper.status == Status.NULL ||
-sessionWrapper.zkVersion != 
cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion() ||
-TimeUnit.SECONDS.convert(timeSource.getTimeNs() - 
sessionWrapper.lastUpdateTime, TimeUnit.NANOSECONDS) > SESSION_EXPIRY) {
-  //no session available or the session is expired
+// If nothing in the cache can possibly work, create a new session
+if (!hasNonExpiredSession(zkVersion, oldestUpdateTimeNs)) {
   return createSession(cloudManager);
-} else {
+}
+
+// Try to find a session available right away
+SessionWrapper sw = getAvailableSession(zkVersion, oldestUpdateTimeNs);
+
+if (sw != null) {
+  if (log.isDebugEnabled()) {
+log.debug("reusing session {}", sw.getCreateTime());
+  }
+  return sw;
+} else if (allowWait) {
+  // No session available, but if we wait a bit, maybe one can become 
available
+  // wait 1 to 10 secs in case a session is returned. Random to spread 
wakeup otherwise sessions not reused
+  long waitForMs = (long) (Math.random() * 9 * 1000 + 1000);
+
+  if (log.isDebugEnabled()) {
+log.debug("No sessions are available, all busy COMPUTING. starting 
wait of {}ms", waitForMs);
+  }
   long waitStart = time(timeSource, MILLISECONDS);
-  //the session is not expired
-  log.debug("reusing a session {}", this.sessionWrapper.createTime);
-  if (this.sessionWrapper.status == Status.UNUSED || 
this.sessionWrapper.status == Status.EXECUTING) {
-this.sessionWrapper.status = Status.COMPUTING;
-return sessionWrapper;
-  } else {
-//status= COMPUTING it's being used for computing. computing is
-if (log.isDebugEnabled()) {
-  log.debug("session being used. waiting... current time {} ", 
time(timeSource, MILLISECONDS));
-}
-try {
-  lockObj.wait(10 * 1000);//wait for a max of 10 seconds
-} catch (InterruptedException e) {
-  log.info("interrupted... ");
-}
+  try {
+lockObj.wait(waitForMs);
+  } catch (InterruptedException e) {
+Thread.currentThread().interrupt();
+  }
+
+  if 

[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r427923626



##
File path: 
solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -429,87 +440,124 @@ private void release(SessionWrapper sessionWrapper) {
  * The session can be used by others while the caller is performing 
operations
  */
 private void returnSession(SessionWrapper sessionWrapper) {
-  TimeSource timeSource = sessionWrapper.session != null ? 
sessionWrapper.session.cloudManager.getTimeSource() : TimeSource.NANO_TIME;
+  boolean present;
   synchronized (lockObj) {
 sessionWrapper.status = Status.EXECUTING;
-if (log.isDebugEnabled()) {
-  log.debug("returnSession, curr-time {} sessionWrapper.createTime {}, 
this.sessionWrapper.createTime {} "
-  , time(timeSource, MILLISECONDS),
-  sessionWrapper.createTime,
-  this.sessionWrapper.createTime);
-}
-if (sessionWrapper.createTime == this.sessionWrapper.createTime) {
-  //this session was used for computing new operations and this can 
now be used for other
-  // computing
-  this.sessionWrapper = sessionWrapper;
+present = sessionWrapperSet.contains(sessionWrapper);
 
-  //one thread who is waiting for this need to be notified.
-  lockObj.notify();
-} else {
-  log.debug("create time NOT SAME {} ", 
SessionWrapper.DEFAULT_INSTANCE.createTime);
-  //else just ignore it
-}
+// wake up single thread waiting for a session return (ok if not woken 
up, wait is short)
+lockObj.notify();
   }
 
+  // Logging
+  if (present) {
+if (log.isDebugEnabled()) {
+  log.debug("returnSession {}", sessionWrapper.getCreateTime());
+}
+  } else {
+log.warn("returning unknown session {} ", 
sessionWrapper.getCreateTime());
+  }
 }
 
 
-public SessionWrapper get(SolrCloudManager cloudManager) throws 
IOException, InterruptedException {
+public SessionWrapper get(SolrCloudManager cloudManager, boolean 
allowWait) throws IOException, InterruptedException {
   TimeSource timeSource = cloudManager.getTimeSource();
+  long oldestUpdateTimeNs = 
TimeUnit.SECONDS.convert(timeSource.getTimeNs(), TimeUnit.NANOSECONDS) - 
SESSION_EXPIRY;
+  int zkVersion = 
cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion();
+
   synchronized (lockObj) {
-if (sessionWrapper.status == Status.NULL ||
-sessionWrapper.zkVersion != 
cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion() ||
-TimeUnit.SECONDS.convert(timeSource.getTimeNs() - 
sessionWrapper.lastUpdateTime, TimeUnit.NANOSECONDS) > SESSION_EXPIRY) {
-  //no session available or the session is expired
+// If nothing in the cache can possibly work, create a new session
+if (!hasNonExpiredSession(zkVersion, oldestUpdateTimeNs)) {
   return createSession(cloudManager);
-} else {
+}
+
+// Try to find a session available right away
+SessionWrapper sw = getAvailableSession(zkVersion, oldestUpdateTimeNs);
+
+if (sw != null) {
+  if (log.isDebugEnabled()) {
+log.debug("reusing session {}", sw.getCreateTime());
+  }
+  return sw;
+} else if (allowWait) {
+  // No session available, but if we wait a bit, maybe one can become 
available
+  // wait 1 to 10 secs in case a session is returned. Random to spread 
wakeup otherwise sessions not reused
+  long waitForMs = (long) (Math.random() * 9 * 1000 + 1000);
+
+  if (log.isDebugEnabled()) {
+log.debug("No sessions are available, all busy COMPUTING. starting 
wait of {}ms", waitForMs);
+  }
   long waitStart = time(timeSource, MILLISECONDS);
-  //the session is not expired
-  log.debug("reusing a session {}", this.sessionWrapper.createTime);
-  if (this.sessionWrapper.status == Status.UNUSED || 
this.sessionWrapper.status == Status.EXECUTING) {
-this.sessionWrapper.status = Status.COMPUTING;
-return sessionWrapper;
-  } else {
-//status= COMPUTING it's being used for computing. computing is
-if (log.isDebugEnabled()) {
-  log.debug("session being used. waiting... current time {} ", 
time(timeSource, MILLISECONDS));
-}
-try {
-  lockObj.wait(10 * 1000);//wait for a max of 10 seconds
-} catch (InterruptedException e) {
-  log.info("interrupted... ");
-}
+  try {
+lockObj.wait(waitForMs);
+  } catch (InterruptedException e) {
+Thread.currentThread().interrupt();
+  }
+
+  if 

[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r427921563



##
File path: solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -429,87 +440,124 @@ private void release(SessionWrapper sessionWrapper) {
  * The session can be used by others while the caller is performing operations
  */
 private void returnSession(SessionWrapper sessionWrapper) {
-  TimeSource timeSource = sessionWrapper.session != null ? sessionWrapper.session.cloudManager.getTimeSource() : TimeSource.NANO_TIME;
+  boolean present;
   synchronized (lockObj) {
 sessionWrapper.status = Status.EXECUTING;
-if (log.isDebugEnabled()) {
-  log.debug("returnSession, curr-time {} sessionWrapper.createTime {}, this.sessionWrapper.createTime {} "
-  , time(timeSource, MILLISECONDS),
-  sessionWrapper.createTime,
-  this.sessionWrapper.createTime);
-}
-if (sessionWrapper.createTime == this.sessionWrapper.createTime) {
-  //this session was used for computing new operations and this can now be used for other
-  // computing
-  this.sessionWrapper = sessionWrapper;
+present = sessionWrapperSet.contains(sessionWrapper);
 
-  //one thread who is waiting for this need to be notified.
-  lockObj.notify();
-} else {
-  log.debug("create time NOT SAME {} ", SessionWrapper.DEFAULT_INSTANCE.createTime);
-  //else just ignore it
-}
+// wake up single thread waiting for a session return (ok if not woken up, wait is short)
+lockObj.notify();
   }
 
+  // Logging
+  if (present) {
+if (log.isDebugEnabled()) {
+  log.debug("returnSession {}", sessionWrapper.getCreateTime());
+}
+  } else {
+log.warn("returning unknown session {} ", sessionWrapper.getCreateTime());
+  }
 }
 
 
-public SessionWrapper get(SolrCloudManager cloudManager) throws IOException, InterruptedException {
+public SessionWrapper get(SolrCloudManager cloudManager, boolean allowWait) throws IOException, InterruptedException {
   TimeSource timeSource = cloudManager.getTimeSource();
+  long oldestUpdateTimeNs = TimeUnit.SECONDS.convert(timeSource.getTimeNs(), TimeUnit.NANOSECONDS) - SESSION_EXPIRY;
+  int zkVersion = cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion();
+
   synchronized (lockObj) {
-if (sessionWrapper.status == Status.NULL ||
-sessionWrapper.zkVersion != cloudManager.getDistribStateManager().getAutoScalingConfig().getZkVersion() ||
-TimeUnit.SECONDS.convert(timeSource.getTimeNs() - sessionWrapper.lastUpdateTime, TimeUnit.NANOSECONDS) > SESSION_EXPIRY) {
-  //no session available or the session is expired
+// If nothing in the cache can possibly work, create a new session
+if (!hasNonExpiredSession(zkVersion, oldestUpdateTimeNs)) {
   return createSession(cloudManager);
-} else {
+}
+
+// Try to find a session available right away
+SessionWrapper sw = getAvailableSession(zkVersion, oldestUpdateTimeNs);
+
+if (sw != null) {
+  if (log.isDebugEnabled()) {
+log.debug("reusing session {}", sw.getCreateTime());
+  }
+  return sw;
+} else if (allowWait) {
+  // No session available, but if we wait a bit, maybe one can become available
+  // wait 1 to 10 secs in case a session is returned. Random to spread wakeup otherwise sessions not reused
+  long waitForMs = (long) (Math.random() * 9 * 1000 + 1000);
+
+  if (log.isDebugEnabled()) {
+log.debug("No sessions are available, all busy COMPUTING. starting wait of {}ms", waitForMs);
+  }
   long waitStart = time(timeSource, MILLISECONDS);
-  //the session is not expired
-  log.debug("reusing a session {}", this.sessionWrapper.createTime);
-  if (this.sessionWrapper.status == Status.UNUSED || this.sessionWrapper.status == Status.EXECUTING) {
-this.sessionWrapper.status = Status.COMPUTING;
-return sessionWrapper;
-  } else {
-//status= COMPUTING it's being used for computing. computing is
-if (log.isDebugEnabled()) {
-  log.debug("session being used. waiting... current time {} ", time(timeSource, MILLISECONDS));
-}
-try {
-  lockObj.wait(10 * 1000);//wait for a max of 10 seconds
-} catch (InterruptedException e) {
-  log.info("interrupted... ");
-}
+  try {
+lockObj.wait(waitForMs);
+  } catch (InterruptedException e) {
+Thread.currentThread().interrupt();
+  }
+
+  if 

[GitHub] [lucene-solr] murblanc commented on a change in pull request #1504: SOLR-14462: cache more than one autoscaling session

2020-05-20 Thread GitBox


murblanc commented on a change in pull request #1504:
URL: https://github.com/apache/lucene-solr/pull/1504#discussion_r427919669



##
File path: solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/PolicyHelper.java
##
@@ -382,45 +383,55 @@ static MapWriter loggingInfo(Policy policy, SolrCloudManager cloudManager, Sugge
   }
 
   public enum Status {
-NULL,
-//it is just created and not yet used or all operations on it has been completed fully
-UNUSED,
-COMPUTING, EXECUTING
+COMPUTING, // A command is actively using and modifying the session to compute placements
+EXECUTING // A command is not done yet processing its changes but no longer uses the session
   }
 
   /**
-   * This class stores a session for sharing purpose. If a process creates a session to
-   * compute operations,
-   * 1) see if there is a session that is available in the cache,
-   * 2) if yes, check if it is expired
-   * 3) if it is expired, create a new session
-   * 4) if it is not expired, borrow it
-   * 5) after computing operations put it back in the cache
+   * This class stores sessions for sharing purposes. If a process requirees a session to

Review comment:
   Thanks. I have the MacBook Pro butterfly keyboard, it's a catastrophe!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
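The get()/returnSession() changes in the diff above hinge on two small concurrency patterns: a randomized wait bound, so blocked callers do not all wake and contend at once, and restoring the interrupt flag instead of merely logging InterruptedException. A minimal standalone sketch of both, with illustrative names rather than Solr's actual API:

```java
import java.util.concurrent.ThreadLocalRandom;

class SessionWaitSketch {
    private final Object lockObj = new Object();

    /** Random wait in [1000, 10000) ms, mirroring (long) (Math.random() * 9 * 1000 + 1000) in the diff. */
    static long randomWaitMs() {
        return 1000L + ThreadLocalRandom.current().nextLong(9000L);
    }

    /** Wait (bounded) for a session to be returned; on interrupt, restore the flag rather than swallow it. */
    void awaitSessionReturn(long waitForMs) {
        synchronized (lockObj) {
            try {
                lockObj.wait(waitForMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // let callers observe the interrupt
            }
        }
    }
}
```

A returner would call lockObj.notify() while holding the same monitor, waking at most one waiter; if nobody is waiting the notification is lost, which is acceptable here because the wait is bounded anyway.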



[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112002#comment-17112002
 ] 

Cao Manh Dat edited comment on SOLR-14419 at 5/20/20, 9:58 AM:
---

 
{quote}recursive dependency
{quote}
My point here is that the paramValue is just a String ("type:parent"). Only if paramValue were a JsonObject would recursive dependency become a problem.
{quote}_feature is kinda limited_ I see no limits so far.
{quote}
I mean I don't see many use cases where this feature will be useful.

Right, the $ will be a problem if the query starts with $. Then how do the traditional local params solve that problem?


was (Author: caomanhdat):
 
{quote}recursive dependency
{quote}
My point here is the paramValue here is just a String ("type:parent"), It will 
be nice if paramValue is a JsonObject, then recurisve dependency will be a 
problem.
{quote}_feature is kinda limited_ I see no limits so far.
{quote}
I mean I don't see many usecase this feature will be useful?

Right, the $ will be a problem if the query start with $. 

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112002#comment-17112002
 ] 

Cao Manh Dat edited comment on SOLR-14419 at 5/20/20, 9:57 AM:
---

 
{quote}recursive dependency
{quote}
My point here is that the paramValue is just a String ("type:parent"). Only if paramValue were a JsonObject would recursive dependency become a problem.
{quote}_feature is kinda limited_ I see no limits so far.
{quote}
I mean I don't see many use cases where this feature will be useful.

Right, the $ will be a problem if the query starts with $. 


was (Author: caomanhdat):
 
{quote}recursive dependency
{quote}
My point here is the paramValue here is just a String ("type:parent"), It will 
be nice if paramValue is a JsonObject, then recurisve dependency will be a 
problem.
{quote}_feature is kinda limited_ I see no limits so far.
{quote}
I mean I don't see many usecase this feature will be useful?

 

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17112002#comment-17112002
 ] 

Cao Manh Dat commented on SOLR-14419:
-

 
{quote}recursive dependency
{quote}
My point here is that the paramValue is just a String ("type:parent"). Only if paramValue were a JsonObject would recursive dependency become a problem.
{quote}_feature is kinda limited_ I see no limits so far.
{quote}
I mean I don't see many use cases where this feature will be useful.

 

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14502) increase bin/solr's post kill sleep

2020-05-20 Thread Christine Poerschke (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-14502:
---
Attachment: SOLR-14502.patch

> increase bin/solr's post kill sleep
> ---
>
> Key: SOLR-14502
> URL: https://issues.apache.org/jira/browse/SOLR-14502
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-14502.patch
>
>
> Currently e.g. 
> https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/solr/bin/solr#L863
>  we wait for one second after the {{kill -9}} before re-checking if the 
> process still exists.
> We've seen a few cases where the {{kill -9}} succeeded but only slightly 
> after the one second interval. So this ticket here proposes to increase the 
> interval from 1s to (say) 10s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
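The proposed change amounts to replacing a single fixed 1-second sleep with a longer, bounded wait for the killed process to disappear. The actual fix lives in the bin/solr shell script; the following Java sketch (illustrative only, using ProcessHandle) shows the same poll-until-gone-or-timeout loop:

```java
import java.time.Duration;
import java.time.Instant;

class PostKillWait {
    /** Poll until the process exits or the timeout elapses; returns true if it exited. */
    static boolean waitForExit(ProcessHandle ph, Duration timeout) {
        Instant deadline = Instant.now().plus(timeout);
        while (ph.isAlive()) {
            if (Instant.now().isAfter(deadline)) {
                return false; // still alive after the full window (e.g. 10s instead of 1s)
            }
            try {
                Thread.sleep(100); // short re-check interval instead of one fixed 1s sleep
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```

Polling with a short interval under a long deadline reports success as soon as the process is gone, instead of failing just because the exit landed slightly after a single fixed sleep.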



[jira] [Created] (SOLR-14502) increase bin/solr's post kill sleep

2020-05-20 Thread Christine Poerschke (Jira)
Christine Poerschke created SOLR-14502:
--

 Summary: increase bin/solr's post kill sleep
 Key: SOLR-14502
 URL: https://issues.apache.org/jira/browse/SOLR-14502
 Project: Solr
  Issue Type: Task
Reporter: Christine Poerschke
Assignee: Christine Poerschke


Currently e.g. 
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/solr/bin/solr#L863
 we wait for one second after the {{kill -9}} before re-checking if the process 
still exists.

We've seen a few cases where the {{kill -9}} succeeded but only slightly after 
the one second interval. So this ticket here proposes to increase the interval 
from 1s to (say) 10s.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14442) bin/solr to attempt jstack before killing hung Solr instance

2020-05-20 Thread Christine Poerschke (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14442?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111935#comment-17111935
 ] 

Christine Poerschke commented on SOLR-14442:


Thanks [~mkhl] for taking a look at {{solr.cmd}} here!

bq. ... it seems that current code doesn't stop process ...

Wow, that's an interesting and unexpected side effect find.

In terms of a {{solr/CHANGES.txt}} entry for this ticket I'd been undecided and wondered if it could perhaps be omitted, jstack being an implementation detail to an extent and also not something the typical user would be likely to see, hopefully. A fix for {{solr.cmd}} not stopping as expected might change that though. Thoughts?

> bin/solr to attempt jstack before killing hung Solr instance
> 
>
> Key: SOLR-14442
> URL: https://issues.apache.org/jira/browse/SOLR-14442
> Project: Solr
>  Issue Type: Improvement
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-14442.patch, SOLR-14442.patch, SOLR-14442.patch, 
> screenshot-1.png
>
>
> If a Solr instance did not respond to the 'stop' command in a timely manner 
> then the {{bin/solr}} script will attempt to forcefully kill it: 
> [https://github.com/apache/lucene-solr/blob/releases/lucene-solr/8.5.1/solr/bin/solr#L859]
> Gathering of information (e.g. a jstack of the java process) before the kill 
> command may be helpful in determining why the instance did not stop as 
> expected.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
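The ticket's change invokes the external jstack binary from bin/solr before the kill. For reference, the same thread-dump information is exposed in-process through ThreadMXBean; this sketch (illustrative, not the patch itself) renders roughly what a jstack-before-kill step would capture:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;

class ThreadDumpSketch {
    /** Render a stack dump of all live threads, roughly what jstack prints for this JVM. */
    static String dumpAllThreads() {
        StringBuilder sb = new StringBuilder();
        for (ThreadInfo info : ManagementFactory.getThreadMXBean().dumpAllThreads(false, false)) {
            sb.append(info.toString()); // includes thread name, state, and stack frames
        }
        return sb.toString();
    }
}
```

Capturing this before the forced kill preserves the evidence of why the instance failed to stop; afterwards the process, and its stacks, are gone.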



[jira] [Commented] (LUCENE-9360) might be NEEDED. ToParentDocValues uses advanceExact() of underneath DocValues

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111934#comment-17111934
 ] 

Mikhail Khludnev commented on LUCENE-9360:
--

bq.  what the problem is with calling advance() under the hood, can you explain
[~jpountz], it turns out like this: say we have parent docnums 10, 20, 30. While scoring the 10th block, the child filter might jump beyond the 20th block due to absent child docs, and then {{ToParentDocValues.advanceExact(20)}} would drag the child field doc values backwards. This never happens in {{/master}} or {{_8x}} now, but attempting to reuse docValues across sorting group heads leads to this trouble.
LUCENE-9328 provides a fix for it, but it is a little bit fragile. I'd really appreciate your feedback. Thanks.


> might be NEEDED. ToParentDocValues uses advanceExact() of underneath DocValues
> --
>
> Key: LUCENE-9360
> URL: https://issues.apache.org/jira/browse/LUCENE-9360
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Mikhail Khludnev
>Priority: Major
>
> Currently {{ToParentDocvalues.advanceExact()}} propagates it to 
> {{DocValues.advance()}} as advised at LUCENE-7871. It causes some problem at 
> LUCENE-9328 and seems not really reasonable. The later jira has patch 
> attached which resolves this. The questions is why(not)?
> cc [~jpountz]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111927#comment-17111927
 ] 

Mikhail Khludnev commented on SOLR-14419:
-

[~caomanhdat], thanks for your reply.
1. It's verbose; that's why it's meant for advanced use cases, and it lets us avoid a lot of repetition.
2. _Can it be $paramName_: that's what I started from, but how do we distinguish between a string query starting with {{$}} and this $ref? Someone might search for $$.
3. _recursive dependency_: this code just puts param refs {{{!v=$ref}}}, and these refs are resolved by {{QParser}}, which already has {{checkRecursive}}. Do you think it's worth adding an explicit test?
4. _feature is kinda limited_: I see no limits so far.
5. Off-topic: _we already have tags_. I regret bringing in this #microsyntax; now I think it's redundant and we would be just fine with explicit {{"tags":"foo,bar"}} properties. Sad, but that's it.

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-9818) Solr admin UI rapidly retries any request(s) if it loses connection with the server

2020-05-20 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111893#comment-17111893
 ] 

Jan Høydahl commented on SOLR-9818:
---

Can we please fully get rid of the retry buffer? Instead just display the 
errors and not retry the original request? Will this patch [^SOLR-9818.patch] 
work?

> Solr admin UI rapidly retries any request(s) if it loses connection with the 
> server
> ---
>
> Key: SOLR-9818
> URL: https://issues.apache.org/jira/browse/SOLR-9818
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 6.3
>Reporter: Ere Maijala
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-9818.patch
>
>
> It seems that whenever the Solr admin UI loses connection with the server, be 
> the reason that the server is too slow to answer or that it's gone away 
> completely, it starts hammering the server with the previous request until it 
> gets a success response, it seems. That can be especially bad if the last 
> attempted action was something like collection reload with a SolrCloud 
> instance. The admin UI will quickly add hundreds of reload commands to 
> overseer/collection-queue-work, which may essentially cause the replicas to 
> get overloaded when they're trying to handle all the reload commands.
> I believe the UI should never retry the previous command blindly when the 
> connection is lost, but instead just ping the server until it responds again.
> Steps to reproduce:
> 1.) Fire up Solr
> 2.) Open the admin UI in browser
> 3.) Open a web console in the browser to see the requests it sends
> 4.) Stop solr
> 5.) Try an action in the admin UI
> 6.) Observe the web console in browser quickly fill up with repeats of the 
> originally attempted request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
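The fix direction described above, stop replaying the failed request and instead probe the server until it answers, can be sketched independently of the admin UI code. Here the ping is abstracted as a Supplier<Boolean>; all names are illustrative:

```java
import java.util.function.Supplier;

class PingNotRetry {
    /**
     * After a request fails because the server is unreachable, poll a cheap
     * ping endpoint until it succeeds or we give up. The original (possibly
     * destructive) request is never replayed automatically.
     */
    static boolean waitUntilServerUp(Supplier<Boolean> ping, int maxAttempts, long backoffMs)
            throws InterruptedException {
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            if (ping.get()) {
                return true; // server is back; let the user re-issue the action explicitly
            }
            Thread.sleep(backoffMs); // back off between pings instead of hammering
        }
        return false;
    }
}
```

This avoids the failure mode in the issue description, where each retry of a collection-reload request enqueues more overseer work.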



[jira] [Updated] (SOLR-9818) Solr admin UI rapidly retries any request(s) if it loses connection with the server

2020-05-20 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-9818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9818:
--
Attachment: SOLR-9818.patch

> Solr admin UI rapidly retries any request(s) if it loses connection with the 
> server
> ---
>
> Key: SOLR-9818
> URL: https://issues.apache.org/jira/browse/SOLR-9818
> Project: Solr
>  Issue Type: Bug
>  Components: Admin UI
>Affects Versions: 6.3
>Reporter: Ere Maijala
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-9818.patch
>
>
> It seems that whenever the Solr admin UI loses connection with the server, be 
> the reason that the server is too slow to answer or that it's gone away 
> completely, it starts hammering the server with the previous request until it 
> gets a success response, it seems. That can be especially bad if the last 
> attempted action was something like collection reload with a SolrCloud 
> instance. The admin UI will quickly add hundreds of reload commands to 
> overseer/collection-queue-work, which may essentially cause the replicas to 
> get overloaded when they're trying to handle all the reload commands.
> I believe the UI should never retry the previous command blindly when the 
> connection is lost, but instead just ping the server until it responds again.
> Steps to reproduce:
> 1.) Fire up Solr
> 2.) Open the admin UI in browser
> 3.) Open a web console in the browser to see the requests it sends
> 4.) Stop solr
> 5.) Try an action in the admin UI
> 6.) Observe the web console in browser quickly fill up with repeats of the 
> originally attempted request



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111882#comment-17111882
 ] 

Cao Manh Dat edited comment on SOLR-14419 at 5/20/20, 7:53 AM:
---

It seems {{{'param': 'paramName'}}} is too verbose and vague at the same time? Can it be {{$paramName}} only (I don't like special characters, but we already have tags)?

It seems like paramValue can only be a String. If we supported paramValue as a Json object, it might lead to recursive dependency, i.e. paramA -> paramB -> paramA -> etc.

So the application of this feature is kinda limited, isn't it?


was (Author: caomanhdat):
It seems \{'param': 'paramName'} too verbose and vague at the same time? Can it 
be {{$paramName}} only (I don't like special character, but we already have 
tags).


 It seems like paramValue can only be a String. If we support paramValue as a 
Json object, it may leads to recursive dependency, i.e: paramA -> paramB -> 
paramA -> ...

So the application of this feature is kinda limited, is it?

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111882#comment-17111882
 ] 

Cao Manh Dat edited comment on SOLR-14419 at 5/20/20, 7:53 AM:
---

It seems \{'param': 'paramName'} is too verbose and vague at the same time? Can it be {{$paramName}} only (I don't like special characters, but we already have tags)?

It seems like paramValue can only be a String. If we supported paramValue as a Json object, it might lead to recursive dependency, i.e. paramA -> paramB -> paramA -> etc.

So the application of this feature is kinda limited, isn't it?


was (Author: caomanhdat):
It seems {{{'param': 'paramName'}}} too verbose and vague at the same time? Can 
it be {{$paramName}} only (I don't like special character, but we already have 
tags).

It seems like paramValue can only be a String. If we support paramValue as a 
Json object, it may leads to recursive dependency, i.e: paramA -> paramB -> 
paramA -> etc

So the application of this feature is kinda limited, is it?

> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111882#comment-17111882
 ] 

Cao Manh Dat commented on SOLR-14419:
-

It seems {'param': 'paramName'} is too verbose and vague at the same time? Can it be {{$paramName}} only (I don't like special characters, but we already have tags)?
It seems like paramValue can only be a String. If we supported paramValue as a Json object, it might lead to recursive dependency, i.e. paramA -> paramB -> paramA -> ...
So the application of this feature is kinda limited, isn't it?


> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14419) Query DLS {"param":"ref"}

2020-05-20 Thread Cao Manh Dat (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111882#comment-17111882
 ] 

Cao Manh Dat edited comment on SOLR-14419 at 5/20/20, 7:52 AM:
---

It seems \{'param': 'paramName'} is too verbose and vague at the same time? Can it be {{$paramName}} only (I don't like special characters, but we already have tags)?

It seems like paramValue can only be a String. If we supported paramValue as a Json object, it might lead to recursive dependency, i.e. paramA -> paramB -> paramA -> ...

So the application of this feature is kinda limited, isn't it?


was (Author: caomanhdat):
It seems {'param': 'paramName'} too verbose and vague at the same time? Can it 
be {{$paramName}} only (I don't like special character, but we already have 
tags).
It seems like paramValue can only be a String. If we support paramValue as a 
Json object, it may leads to recursive dependency, i.e: paramA -> paramB -> 
paramA -> ...
So the application of this feature is kinda limited, is it?


> Query DLS {"param":"ref"}
> -
>
> Key: SOLR-14419
> URL: https://issues.apache.org/jira/browse/SOLR-14419
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: JSON Request API
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-14419.patch, SOLR-14419.patch, SOLR-14419.patch
>
>
> What we can do with plain params: 
> {{q=\{!parent which=$prnts}...=type:parent}}
> obviously I want to have something like this in Query DSL:
> {code}
> { "query": { "parents":{ "which":{"param":"prnts"}, "query":"..."}}
>   "params": {
>   "prnts":"type:parent"
>}
> }
> {code} 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14484) NPE in ConcurrentUpdateHttp2SolrClient MDC logging

2020-05-20 Thread Andras Salamon (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111826#comment-17111826
 ] 

Andras Salamon commented on SOLR-14484:
---

Yes, that's even simpler. Uploaded a new patch.

> NPE in ConcurrentUpdateHttp2SolrClient MDC logging
> --
>
> Key: SOLR-14484
> URL: https://issues.apache.org/jira/browse/SOLR-14484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.4.1
>Reporter: Andras Salamon
>Priority: Minor
> Attachments: SOLR-14484-01.patch, SOLR-14484-02.patch
>
>
> {{client.getBaseURL()}} can be null in {{ConcurrentUpdateHttp2SolrClient}} 
> which can cause problems in MDC logging.
> We had the following error in the stacktrace. We were using Solr 8.4.1 from 
> lily hbase-indexer which still uses log4j 1.2:
> {noformat}
> Error from server at http://127.0.0.1:45895/solr/collection1: 
> java.lang.NullPointerException
>  at java.util.Hashtable.put(Hashtable.java:459)
>  at org.apache.log4j.MDC.put0(MDC.java:150)
>  at org.apache.log4j.MDC.put(MDC.java:85)
>  at org.slf4j.impl.Log4jMDCAdapter.put(Log4jMDCAdapter.java:67)
>  at org.slf4j.MDC.put(MDC.java:147)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.addRunner(ConcurrentUpdateHttp2SolrClient.java:346)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.waitForEmptyQueue(ConcurrentUpdateHttp2SolrClient.java:565)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org
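The stack trace shows log4j 1.2's MDC backing store (a Hashtable) rejecting a null value when client.getBaseURL() is null. The guard needed is simply to skip the put for null values. A standalone sketch using a plain map as a stand-in for the MDC (the real fix is in ConcurrentUpdateHttp2SolrClient, not this helper):

```java
import java.util.HashMap;
import java.util.Map;

class MdcSafePut {
    // stand-in for org.slf4j.MDC, which on the log4j 1.2 binding is backed by a null-hostile Hashtable
    static final Map<String, String> CONTEXT = new HashMap<>();

    /** Only set the diagnostic-context entry when the value is actually known. */
    static void putIfNotNull(String key, String value) {
        if (value != null) {
            CONTEXT.put(key, value);
        }
    }
}
```

With this guard, a client whose base URL has not been set simply produces log lines without that MDC field, instead of an NPE deep inside the logging call.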



[jira] [Updated] (SOLR-14484) NPE in ConcurrentUpdateHttp2SolrClient MDC logging

2020-05-20 Thread Andras Salamon (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Salamon updated SOLR-14484:
--
Attachment: SOLR-14484-02.patch

> NPE in ConcurrentUpdateHttp2SolrClient MDC logging
> --
>
> Key: SOLR-14484
> URL: https://issues.apache.org/jira/browse/SOLR-14484
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 8.4.1
>Reporter: Andras Salamon
>Priority: Minor
> Attachments: SOLR-14484-01.patch, SOLR-14484-02.patch
>
>
> {{client.getBaseURL()}} can be null in {{ConcurrentUpdateHttp2SolrClient}} 
> which can cause problems in MDC logging.
> We had the following error in the stacktrace. We were using Solr 8.4.1 from 
> lily hbase-indexer which still uses log4j 1.2:
> {noformat}
> Error from server at http://127.0.0.1:45895/solr/collection1: 
> java.lang.NullPointerException
>  at java.util.Hashtable.put(Hashtable.java:459)
>  at org.apache.log4j.MDC.put0(MDC.java:150)
>  at org.apache.log4j.MDC.put(MDC.java:85)
>  at org.slf4j.impl.Log4jMDCAdapter.put(Log4jMDCAdapter.java:67)
>  at org.slf4j.MDC.put(MDC.java:147)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.addRunner(ConcurrentUpdateHttp2SolrClient.java:346)
>  at 
> org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient.waitForEmptyQueue(ConcurrentUpdateHttp2SolrClient.java:565)
>  {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org