[jira] [Commented] (SOLR-15154) Let Http2SolrClient pass Basic Auth credentials to all requests

2021-03-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297104#comment-17297104
 ] 

ASF subversion and git services commented on SOLR-15154:


Commit 1b628d3687d003202730b6c3ce084091f20cc670 in lucene-solr's branch 
refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1b628d3 ]

SOLR-15154: Fix Bad merge


> Let Http2SolrClient pass Basic Auth credentials to all requests
> ---
>
> Key: SOLR-15154
> URL: https://issues.apache.org/jira/browse/SOLR-15154
> Project: Solr
>  Issue Type: Improvement
>  Components: SolrJ
>Reporter: Tomas Eduardo Fernandez Lobbe
>Priority: Minor
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> In {{HttpSolrClient}}, one could specify credentials [at the JVM 
> level|https://lucene.apache.org/solr/guide/8_8/basic-authentication-plugin.html#global-jvm-basic-auth-credentials],
>  and they would be sent with every request to Solr. This doesn't work with 
> the Http2 client, and I think it would be very useful.
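
For context, a minimal sketch of the per-request alternative that does work with Http2SolrClient today (the JVM-level {{basicauth}} credentials only wire into the Apache HttpClient path, which is what this issue wants to change); the URL, collection, and credentials below are placeholders:

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.Http2SolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.response.QueryResponse;

public class BasicAuthPerRequest {
  public static void main(String[] args) throws Exception {
    try (Http2SolrClient client =
        new Http2SolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      QueryRequest req = new QueryRequest(new SolrQuery("*:*"));
      // Credentials currently have to be attached to each request; this issue asks
      // for a client-wide / JVM-wide equivalent for the Http2 client.
      req.setBasicAuthCredentials("solr", "SolrRocks");
      QueryResponse rsp = req.process(client);
      System.out.println("numFound=" + rsp.getResults().getNumFound());
    }
  }
}
{code}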



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15193) Add Graph to the Visual Guide to Streaming Expressions and Math Expressions

2021-03-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297026#comment-17297026
 ] 

ASF subversion and git services commented on SOLR-15193:


Commit 1911c55897d1e5c2d916ebb4f4c00c284ec6bd18 in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1911c55 ]

SOLR-15193: Fix typo


> Add Graph to the Visual Guide to Streaming Expressions and Math Expressions
> ---
>
> Key: SOLR-15193
> URL: https://issues.apache.org/jira/browse/SOLR-15193
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.9
>Reporter: Joel Bernstein
>Priority: Major
> Fix For: 8.9
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15193) Add Graph to the Visual Guide to Streaming Expressions and Math Expressions

2021-03-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297025#comment-17297025
 ] 

ASF subversion and git services commented on SOLR-15193:


Commit e9ddaaca51661ddae471246b798afd80ebc7eef0 in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e9ddaac ]

SOLR-15193: Fix typo


> Add Graph to the Visual Guide to Streaming Expressions and Math Expressions
> ---
>
> Key: SOLR-15193
> URL: https://issues.apache.org/jira/browse/SOLR-15193
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.9
>Reporter: Joel Bernstein
>Priority: Major
> Fix For: 8.9
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15193) Add Graph to the Visual Guide to Streaming Expressions and Math Expressions

2021-03-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297024#comment-17297024
 ] 

ASF subversion and git services commented on SOLR-15193:


Commit c18c91965ddf4262f4f5ba7bd27615efa44f8469 in lucene-solr's branch 
refs/heads/branch_8x from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=c18c919 ]

SOLR-15193: Improve maxDocFreq docs


> Add Graph to the Visual Guide to Streaming Expressions and Math Expressions
> ---
>
> Key: SOLR-15193
> URL: https://issues.apache.org/jira/browse/SOLR-15193
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.9
>Reporter: Joel Bernstein
>Priority: Major
> Fix For: 8.9
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15193) Add Graph to the Visual Guide to Streaming Expressions and Math Expressions

2021-03-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297023#comment-17297023
 ] 

ASF subversion and git services commented on SOLR-15193:


Commit 140c37eb0f0024ce6f9defa6b32351f6417074f4 in lucene-solr's branch 
refs/heads/master from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=140c37e ]

SOLR-15193: Improve maxDocFreq docs


> Add Graph to the Visual Guide to Streaming Expressions and Math Expressions
> ---
>
> Key: SOLR-15193
> URL: https://issues.apache.org/jira/browse/SOLR-15193
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 8.9
>Reporter: Joel Bernstein
>Priority: Major
> Fix For: 8.9
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15158) Error when upgrading from Solr 8.0.0 to 8.5.2

2021-03-07 Thread Marcus Eagan (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17297016#comment-17297016
 ] 

Marcus Eagan commented on SOLR-15158:
-

Hi [~amitguptag], I suspect this issue could be caused by a variety of factors. 
The one that comes to mind immediately is improper formatting of a config file, 
likely JSON, since the error message calls out an unexpected comma right before 
the "authorization" section.

Can you double-check the integrity of your security.json file and ensure it is 
well-formed JSON?

There were also a few significant changes between 8.0 and 8.5, but I would start 
with the security.json check.
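
For what it's worth, a quick way to check is to copy security.json out of ZooKeeper and run it through the same JSON parser Solr uses; a minimal sketch (the local file path and class name are illustrative):

{code:java}
import java.nio.file.Files;
import java.nio.file.Paths;
import org.noggit.ObjectBuilder;

public class CheckSecurityJson {
  public static void main(String[] args) throws Exception {
    // e.g. fetched first with: bin/solr zk cp zk:/security.json ./security.json -z <zkhost>:2181
    String json = Files.readString(Paths.get("security.json"));
    // Throws org.noggit.JSONParser$ParseException (as in the log above) if the file is
    // malformed, e.g. a stray comma before the "authorization" section.
    Object parsed = ObjectBuilder.fromJSON(json);
    System.out.println("security.json parsed OK as " + parsed.getClass().getSimpleName());
  }
}
{code}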

> Error when upgrading from Solr 8.0.0 to 8.5.2
> -
>
> Key: SOLR-15158
> URL: https://issues.apache.org/jira/browse/SOLR-15158
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 8.5.2
>Reporter: Amit Gupta
>Priority: Blocker
>  Labels: 8.5.2, solr, upgrade
>
> I am trying to upgrade our tech stack from 8.0.0 to 8.5.2. The configuration 
> works until the first time the server is rebooted. Then Solr stops working 
> with the error below. The exact same config works with Solr 8.0.0 without issues.
> Extract from solr.log:
> 2021-02-15 19:04:52.922 INFO (main) [ ] o.a.s.c.ZkContainer Zookeeper 
> client=10.201.52.56:2181,10.201.52.169:2181,10.201.52.136:2181
> 2021-02-15 19:04:52.940 INFO (main) [ ] o.a.s.c.c.ConnectionManager Waiting 
> for client to connect to ZooKeeper
> 2021-02-15 19:04:52.949 INFO (zkConnectionManagerCallback-9-thread-1) [ ] 
> o.a.s.c.c.ConnectionManager zkClient has connected
> 2021-02-15 19:04:52.949 INFO (main) [ ] o.a.s.c.c.ConnectionManager Client is 
> connected to ZooKeeper
> 2021-02-15 19:04:53.047 ERROR (main) [ ] o.a.s.s.SolrDispatchFilter Could not 
> start Solr. Check solr/home property and the logs
> 2021-02-15 19:04:53.077 ERROR (main) [ ] o.a.s.c.SolrCore 
> null:org.noggit.JSONParser$ParseException: Unexpected comma: 
> char=,,position=324 AFTER='QuEFwT7s= 
> nlE/oRd/BIthqBP8UY1yFiu6Betb7xDoyTPFD3AxaUo="}} },' BEFORE=' 
> "authorization":{ "class":"solr.R'
>  at org.noggit.JSONParser.err(JSONParser.java:452)
>  at org.noggit.JSONParser.next(JSONParser.java:1013)
>  at org.noggit.JSONParser.nextEvent(JSONParser.java:1073)
>  at org.noggit.ObjectBuilder.checkEOF(ObjectBuilder.java:52)
>  at org.noggit.ObjectBuilder.getValStrict(ObjectBuilder.java:76)
>  at org.apache.solr.common.util.Utils.fromJSON(Utils.java:270)
>  at org.apache.solr.common.util.Utils.fromJSON(Utils.java:256)
>  at 
> org.apache.solr.common.cloud.ZkStateReader.getSecurityProps(ZkStateReader.java:1274)
>  at 
> org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:527)
>  at org.apache.solr.cloud.ZkController.init(ZkController.java:902)
>  at org.apache.solr.cloud.ZkController.(ZkController.java:472)
>  at org.apache.solr.core.ZkContainer.initZooKeeper(ZkContainer.java:115)
>  at org.apache.solr.core.CoreContainer.load(CoreContainer.java:663)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.createCoreContainer(SolrDispatchFilter.java:263)
>  at 
> org.apache.solr.servlet.SolrDispatchFilter.init(SolrDispatchFilter.java:183)
>  at org.eclipse.jetty.servlet.FilterHolder.initialize(FilterHolder.java:134)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.lambda$initialize$0(ServletHandler.java:751)
>  at 
> java.base/java.util.Spliterators$ArraySpliterator.forEachRemaining(Spliterators.java:948)
>  at 
> java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
>  at 
> java.base/java.util.stream.Streams$ConcatSpliterator.forEachRemaining(Streams.java:734)
>  at 
> java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
>  at 
> org.eclipse.jetty.servlet.ServletHandler.initialize(ServletHandler.java:744)
>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:360)
>  at 
> org.eclipse.jetty.webapp.WebAppContext.startWebapp(WebAppContext.java:1445)
>  at 
> org.eclipse.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1409)
>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:822)
>  at 
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
>  at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:524)
>  at 
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
>  at 
> org.eclipse.jetty.deploy.bindings.StandardStarter.processBinding(StandardStarter.java:46)
>  at org.eclipse.jetty.deploy.AppLifeCycle.runBindings(AppLifeCycle.java:188)
>  at 
> org.eclipse.jetty.deploy.DeploymentManager.requestAppGoal(DeploymentManager.java:513)
>  at 
> org.eclipse.jetty.deploy.DeploymentManager.addApp(Deplo

[GitHub] [lucene-solr] mikemccand commented on a change in pull request #2459: LUCENE-9825: Hunspell: reverse the "words" trie for faster word lookup/suggestions

2021-03-07 Thread GitBox


mikemccand commented on a change in pull request #2459:
URL: https://github.com/apache/lucene-solr/pull/2459#discussion_r589079453



##
File path: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/WordStorage.java
##
@@ -0,0 +1,338 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.lucene.analysis.hunspell;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.function.BiConsumer;
+import org.apache.lucene.store.ByteArrayDataInput;
+import org.apache.lucene.store.ByteArrayDataOutput;
+import org.apache.lucene.store.DataOutput;
+import org.apache.lucene.util.ArrayUtil;
+import org.apache.lucene.util.CharsRef;
+import org.apache.lucene.util.IntsRef;
+import org.apache.lucene.util.IntsRefBuilder;
+import org.apache.lucene.util.fst.IntSequenceOutputs;
+
+/**
+ * A data structure for memory-efficient word storage and fast lookup/enumeration. Each dictionary
+ * entry is stored as:
+ *
+ * <ul>
+ *   <li>the last character
+ *   <li>pointer to a similar entry for the prefix (all characters except the last one)
+ *   <li>value data: a list of ints representing word flags and morphological data, and a pointer to
+ *       hash collisions, if any
+ * </ul>
+ *
+ * <p>There's only one entry for each prefix, so it's like a trie/{@link
+ * org.apache.lucene.util.fst.FST}, but a reversed one: each node points to a single previous node
+ * instead of several following ones. For example, "abc" and "abd" point to the same prefix entry
+ * "ab", which points to "a", which points to 0.
+ *
+ * <p>The entries are stored in a contiguous byte array, identified by their offsets, using the
+ * {@link DataOutput#writeVInt(int) VINT} format for compression.
+ */
+class WordStorage {
+  /**
+   * A map from word's hash (modulo array's length) into the offset of the 
last entry in {@link
+   * #wordData} with this hash. Negated, if there's more than one entry with 
the same hash.
+   */
+  private final int[] hashTable;
+
+  /**
+   * An array of word entries:
+   *
+   * <ul>
+   *   <li>VINT: the word's last character
+   *   <li>VINT: pointer to the entry for the same word without the last character. It's relative:
+   *       the difference of this entry's start and the prefix's entry start. 0 for single-character
+   *       entries
+   *   <li>Optional, for non-leaf entries only:
+   *       <ul>
+   *         <li>VINT: the length of the word form data, returned from {@link #lookupWord}
+   *         <li>n * VINT: the word form data
+   *         <li>Optional, for hash-colliding entries only:
+   *             <ul>
+   *               <li>BYTE: 1 if the next collision entry has further collisions, 0 if it's the
+   *                   last of the entries with the same hash
+   *               <li>VINT: (relative) pointer to the previous entry with the same hash
+   *             </ul>
+   *       </ul>
+   * </ul>
+   */
+  private final byte[] wordData;
+
+  private WordStorage(int[] hashTable, byte[] wordData) {
+    this.hashTable = hashTable;
+    this.wordData = wordData;
+  }
+
+  IntsRef lookupWord(char[] word, int offset, int length) {
+    assert length > 0;
+
+    int hash = Math.abs(CharsRef.stringHashCode(word, offset, length) % hashTable.length);
+    int pos = hashTable[hash];
+    if (pos == 0) {
+      return null;
+    }
+
+    boolean collision = pos < 0;
+    pos = Math.abs(pos);
+
+    char lastChar = word[offset + length - 1];
+    ByteArrayDataInput in = new ByteArrayDataInput(wordData);
+    while (true) {
+      in.setPosition(pos);
+      char c = (char) in.readVInt();
+      int prevPos = pos - in.readVInt();

Review comment:
   You might want to fail fast(er) here by checking `if (c != lastChar && 
!collision) { return null; }`?  It might give a small speedup for words not in 
the dictionary, though that is likely the rare case (most lookups exist, or 
share a suffix that exists)?

##
File path: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/WordStorage.java
##
@@ -0,0 +1,338 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding co

[jira] [Commented] (SOLR-15215) SolrJ: Remove needless Netty dependency

2021-03-07 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296971#comment-17296971
 ] 

Mike Drob commented on SOLR-15215:
--

Let’s start creating those dependencies at the same time that we remove them 
from solrj, and also add a solrj-all dependency that people can use for minimal 
changes from their perspective.

> SolrJ: Remove needless Netty dependency
> ---
>
> Key: SOLR-15215
> URL: https://issues.apache.org/jira/browse/SOLR-15215
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SolrJ depends on Netty transitively via ZooKeeper.  But ZooKeeper's Netty 
> dependency should be considered optional -- you have to opt-in.
> BTW it's only needed in Solr-core because of Hadoop/HDFS which ought to move 
> to a contrib and take this dependency with it over there.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15215) SolrJ: Remove needless Netty dependency

2021-03-07 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296970#comment-17296970
 ] 

David Smiley commented on SOLR-15215:
-

bq. would like to have the ZK netty option still available

To be clear, it will still be *available*.  It's an opt-in vs. opt-out matter.  
ZK made the choice of requiring opt-in, but it still kept Netty as a normal 
dependency (you get it on the classpath even if you don't opt in).  In SolrJ, 
the proposal I make here is that clients wanting to choose Netty need to add it 
to their classpath themselves, in addition to setting the system property that 
ZK uses to opt in and/or configure SSL.
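
For illustration, a hedged sketch of what that opt-in looks like from a SolrJ client: Netty goes on the client's own classpath, and ZooKeeper's own client properties are set before the client is built (the property names are ZooKeeper's, the hosts are placeholders):

{code:java}
import java.util.List;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class ZkNettyOptIn {
  public static void main(String[] args) throws Exception {
    // Ask ZooKeeper's client to use its Netty transport (requires the Netty jars on
    // the classpath, which under this proposal solrj would no longer ship).
    System.setProperty("zookeeper.clientCnxnSocket",
        "org.apache.zookeeper.ClientCnxnSocketNetty");
    // Optionally enable TLS to ZooKeeper as well.
    System.setProperty("zookeeper.client.secure", "true");

    try (CloudSolrClient client =
        new CloudSolrClient.Builder(List.of("zk1:2181", "zk2:2181"), Optional.empty())
            .build()) {
      client.connect();
    }
  }
}
{code}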

I would prefer that the base SolrJ dependency not have ZooKeeper either; we'd 
have another "solrj-zk" which would include ZK and maybe/maybe-not Netty.
And a "solrj-expressions" to hold the streaming expressions code + commons-math 
dependency, which are non-trivial.  Until any of this happens, we still only 
have one "solrj" which has too many dependencies.

> SolrJ: Remove needless Netty dependency
> ---
>
> Key: SOLR-15215
> URL: https://issues.apache.org/jira/browse/SOLR-15215
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SolrJ depends on Netty transitively via ZooKeeper.  But ZooKeeper's Netty 
> dependency should be considered optional -- you have to opt-in.
> BTW it's only needed in Solr-core because of Hadoop/HDFS which ought to move 
> to a contrib and take this dependency with it over there.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14166) Use TwoPhaseIterator for non-cached filter queries

2021-03-07 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296964#comment-17296964
 ] 

David Smiley commented on SOLR-14166:
-

CC [~yonik] [~jbernste] [~hossman] as possible reviewers for the attached PR, 
which digs into code that few people have touched, but all three of you have in 
some shape or form.  Please review the issue description, and take a look at 
the PR.  In the PR, each commit is well isolated to what its commit message 
says, so you may prefer to go commit-by-commit, or you could just look at the 
thing as a whole.  In a comment above I pondered "Maybe we could make a 
wrapping query that wraps the underlying TPI.matchCost"; as you'll see in the 
PR, I did that.  The test validates that match() isn't called more often than 
it needs to be.  It used to be called more, which is verifiable by copying the 
test to the 8x line (if I recall, it was called two additional times).  I 
suspect the test doesn't prove that MatchCostQuery is having an effect... I may 
need to think a bit more on how to do that.

I suspect someone will ask me if I did some performance tests.  No, I did not.  
My goal is the removal of tech debt -- Filter -- and in the process I expect 
some performance improvements that Filter was blocking.  So with this issue, 
anyone with non-cached filter queries may see a benefit, especially when those 
queries have TwoPhaseIterators (phrase queries, frange, spatial, more).  The 
benefit may be further pronounced if the main query also has TPIs, because 
Lucene cleverly sees through the boolean queries to group the TPIs of required 
clauses in the tree.
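
For reviewers less familiar with the TwoPhaseIterator contract being leaned on here, a minimal illustrative sketch (the class name and cost value are made up, not the PR's code): the cheap approximation is advanced first, and matches() is only consulted for candidates that survive the other clauses, ordered by matchCost().

{code:java}
import java.io.IOException;
import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.search.TwoPhaseIterator;

final class ExampleTwoPhase extends TwoPhaseIterator {
  ExampleTwoPhase(DocIdSetIterator approximation) {
    super(approximation); // cheap first phase, e.g. a term iterator
  }

  @Override
  public boolean matches() throws IOException {
    // Expensive second-phase check (positions, geometry, function evaluation, ...)
    // for the doc the approximation is currently positioned on.
    return expensiveCheck(approximation.docID());
  }

  @Override
  public float matchCost() {
    // Relative cost estimate; lower-cost verifications are run first across clauses.
    return 100f;
  }

  private boolean expensiveCheck(int doc) {
    return true; // placeholder for the real per-document verification
  }
}
{code}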

> Use TwoPhaseIterator for non-cached filter queries
> --
>
> Key: SOLR-14166
> URL: https://issues.apache.org/jira/browse/SOLR-14166
> Project: Solr
>  Issue Type: Sub-task
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> "fq" filter queries that have cache=false and which aren't processed as a 
> PostFilter (thus either aren't a PostFilter or have a cost < 100) are 
> processed in SolrIndexSearcher using a custom Filter thingy which uses a 
> cost-ordered series of DocIdSetIterators.  This is not TwoPhaseIterator 
> aware, and thus the match() method may be called on docs that ideally would 
> have been filtered by lower-cost filter queries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-15215) SolrJ: Remove needless Netty dependency

2021-03-07 Thread Mike Drob (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-15215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296922#comment-17296922
 ] 

Mike Drob commented on SOLR-15215:
--

I was chatting with a few ZooKeeper folks and they made it sound like Netty 
would be the preferred connection approach going forward. I don’t know about 
the performance of our HTTP cluster state provider, so I would like to have the 
ZK Netty option still available.

Can we add a new module that is just the right transitive dependencies for 
ZK-Netty? It doesn’t need to be a package.

> SolrJ: Remove needless Netty dependency
> ---
>
> Key: SOLR-15215
> URL: https://issues.apache.org/jira/browse/SOLR-15215
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: David Smiley
>Priority: Major
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> SolrJ depends on Netty transitively via ZooKeeper.  But ZooKeeper's Netty 
> dependency should be considered optional -- you have to opt-in.
> BTW it's only needed in Solr-core because of Hadoop/HDFS which ought to move 
> to a contrib and take this dependency with it over there.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr-operator] chaicesan commented on a change in pull request #231: Add conditional dependency for zk-operator helm chart

2021-03-07 Thread GitBox


chaicesan commented on a change in pull request #231:
URL: 
https://github.com/apache/lucene-solr-operator/pull/231#discussion_r589055356



##
File path: helm/solr-operator/Chart.yaml
##
@@ -95,4 +95,10 @@ annotations:
 name: "example"
 numThreads: 4
 image:
-  tag: 8.7.0
\ No newline at end of file
+  tag: 8.7.0
+
+dependencies:
+  - name: 'zookeeper-operator'
+    version: 0.2.9
+    repository: https://charts.pravega.io
+    condition: useZkOperator

Review comment:
   Tell me what you think of the names I have chosen.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9322) Discussing a unified vectors format API

2021-03-07 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17296873#comment-17296873
 ] 

ASF subversion and git services commented on LUCENE-9322:
-

Commit 606cea94d76ffeb978fb23c32dd4baf848a36baf in lucene-solr's branch 
refs/heads/master from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=606cea9 ]

LUCENE-9322: trivial fix in documentation.


> Discussing a unified vectors format API
> ---
>
> Key: LUCENE-9322
> URL: https://issues.apache.org/jira/browse/LUCENE-9322
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Julie Tibshirani
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 11h
>  Remaining Estimate: 0h
>
> Two different approximate nearest neighbor approaches are currently being 
> developed, one based on HNSW (LUCENE-9004) and another based on coarse 
> quantization ([#LUCENE-9136]). Each prototype proposes to add a new format to 
> handle vectors. In LUCENE-9136 we discussed the possibility of a unified API 
> that could support both approaches. The two ANN strategies give different 
> trade-offs in terms of speed, memory, and complexity, and it’s likely that 
> we’ll want to support both. Vector search is also an active research area, 
> and it would be great to be able to prototype and incorporate new approaches 
> without introducing more formats.
> To me it seems like a good time to begin discussing a unified API. The 
> prototype for coarse quantization 
> ([https://github.com/apache/lucene-solr/pull/1314]) could be ready to commit 
> soon (this depends on everyone's feedback of course). The approach is simple 
> and shows solid search performance, as seen 
> [here|https://github.com/apache/lucene-solr/pull/1314#issuecomment-608645326].
>  I think this API discussion is an important step in moving that 
> implementation forward.
> The goals of the API would be
> # Support for storing and retrieving individual float vectors.
> # Support for approximate nearest neighbor search -- given a query vector, 
> return the indexed vectors that are closest to it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] mikemccand commented on a change in pull request #2459: LUCENE-9825: Hunspell: reverse the "words" trie for faster word lookup/suggestions

2021-03-07 Thread GitBox


mikemccand commented on a change in pull request #2459:
URL: https://github.com/apache/lucene-solr/pull/2459#discussion_r589026064



##
File path: 
lucene/analysis/common/src/java/org/apache/lucene/analysis/hunspell/Stemmer.java
##
@@ -94,6 +94,10 @@ public Stemmer(Dictionary dictionary) {
 }
 
     List<CharsRef> list = new ArrayList<>();
+    if (length == 0) {

Review comment:
   LOL empty string.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org