[jira] [Commented] (SOLR-13996) Refactor HttpShardHandler#prepDistributed() into smaller pieces

2020-02-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048442#comment-17048442
 ] 

ASF subversion and git services commented on SOLR-13996:


Commit e7a9fd0a370fde241ee8e0dfc46e3c23df06f065 in lucene-solr's branch 
refs/heads/branch_8x from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e7a9fd0 ]

SOLR-13996: Rename LegacyReplicaSource to StandaloneReplicaSource

(cherry picked from commit 4897a647138757a4a111d9f390f07a1bf16e3b40)


> Refactor HttpShardHandler#prepDistributed() into smaller pieces
> ---
>
> Key: SOLR-13996
> URL: https://issues.apache.org/jira/browse/SOLR-13996
> Project: Solr
>  Issue Type: Improvement
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-13996.patch, SOLR-13996.patch
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently, it is very hard to understand all the various things being done in 
> HttpShardHandler. I'm starting with refactoring the prepDistributed() method 
> to make it easier to grasp. It has standalone and cloud code intertwined, and 
> I wanted to cleanly separate them out. Later, we can even have two separate 
> methods (one for standalone and one for cloud).
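The standalone/cloud split described above can be sketched as a minimal interface extraction. Only the StandaloneReplicaSource name comes from the commit in this thread; the interface shape, method names, and parsing below are simplified assumptions for illustration, not Solr's actual API.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ReplicaSourceSketch {

  // prepDistributed() only needs to ask two things: how many slices are
  // there, and which replica URLs serve each slice?
  interface ReplicaSource {
    int getSliceCount();
    List<String> getReplicasBySlice(int slice);
  }

  // Standalone mode: slices come straight from the shards parameter,
  // with slices comma-separated and alternative replicas pipe-separated.
  static class StandaloneReplicaSource implements ReplicaSource {
    private final List<List<String>> replicas = new ArrayList<>();

    StandaloneReplicaSource(String shardsParam) {
      for (String slice : shardsParam.split(",")) {
        replicas.add(Arrays.asList(slice.split("\\|")));
      }
    }

    public int getSliceCount() { return replicas.size(); }
    public List<String> getReplicasBySlice(int slice) { return replicas.get(slice); }
  }

  public static void main(String[] args) {
    ReplicaSource rs = new StandaloneReplicaSource("hostA/solr|hostB/solr,hostC/solr");
    System.out.println(rs.getSliceCount());        // 2
    System.out.println(rs.getReplicasBySlice(0));  // [hostA/solr, hostB/solr]
  }
}
```

A cloud-mode implementation of the same interface would instead read the slice/replica layout from cluster state, which is the separation the refactoring is after.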



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13996) Refactor HttpShardHandler#prepDistributed() into smaller pieces

2020-02-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048441#comment-17048441
 ] 

ASF subversion and git services commented on SOLR-13996:


Commit 4897a647138757a4a111d9f390f07a1bf16e3b40 in lucene-solr's branch 
refs/heads/master from Shalin Shekhar Mangar
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4897a64 ]

SOLR-13996: Rename LegacyReplicaSource to StandaloneReplicaSource








[GitHub] [lucene-solr] shalinmangar commented on issue #1052: SOLR-13996: Refactoring HttpShardHandler#prepDistributed() into smaller pieces

2020-02-29 Thread GitBox
shalinmangar commented on issue #1052: SOLR-13996: Refactoring 
HttpShardHandler#prepDistributed() into smaller pieces
URL: https://github.com/apache/lucene-solr/pull/1052#issuecomment-593027843
 
 
   I'm closing this in favor of #1220 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] shalinmangar closed pull request #1052: SOLR-13996: Refactoring HttpShardHandler#prepDistributed() into smaller pieces

2020-02-29 Thread GitBox
shalinmangar closed pull request #1052: SOLR-13996: Refactoring 
HttpShardHandler#prepDistributed() into smaller pieces
URL: https://github.com/apache/lucene-solr/pull/1052
 
 
   





[jira] [Commented] (SOLR-14293) Payloads Are Written or Read Incorrectly - Across the Documents

2020-02-29 Thread Ivan Provalov (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048428#comment-17048428
 ] 

Ivan Provalov commented on SOLR-14293:
--

This is not an issue; closing.  The test was not using the payload offsets, 
which caused this behavior.  I updated the test and added a couple more 
interesting cases.
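The resolution above points at a classic test bug: payload bytes typically come back as a (bytes, offset, length) view over a shared buffer, and ignoring the offset reads some other payload's bytes. A self-contained sketch of that failure mode follows; the PayloadRef type is invented here for illustration and is not Lucene's BytesRef, though it mimics its shape.

```java
import java.util.Arrays;

public class PayloadOffsetDemo {

  // Invented BytesRef-like view: payloads for many documents can share
  // one backing buffer, each with its own offset and length.
  static class PayloadRef {
    final byte[] bytes;
    final int offset;
    final int length;

    PayloadRef(byte[] bytes, int offset, int length) {
      this.bytes = bytes;
      this.offset = offset;
      this.length = length;
    }
  }

  // Correct read: honors both offset and length.
  static byte[] readWithOffset(PayloadRef p) {
    return Arrays.copyOfRange(p.bytes, p.offset, p.offset + p.length);
  }

  // Buggy read (the kind of mistake the resolution describes): ignores
  // the offset, so every document appears to carry the first payload.
  static byte[] readIgnoringOffset(PayloadRef p) {
    return Arrays.copyOfRange(p.bytes, 0, p.length);
  }

  public static void main(String[] args) {
    byte[] shared = {1, 2, 3, 4};  // doc1 payload = {1,2}, doc2 payload = {3,4}
    PayloadRef doc2 = new PayloadRef(shared, 2, 2);
    System.out.println(Arrays.toString(readWithOffset(doc2)));      // [3, 4]
    System.out.println(Arrays.toString(readIgnoringOffset(doc2)));  // [1, 2]
  }
}
```

The buggy read makes the second document look as if it were written with the first document's payload, which matches the "written or read incorrectly across the documents" symptom in the issue title.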

> Payloads Are Written or Read Incorrectly - Across the Documents
> ---
>
> Key: SOLR-14293
> URL: https://issues.apache.org/jira/browse/SOLR-14293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 5.1, 5.5.5, 6.3, 7.7.2, 8.3.1
>Reporter: Ivan Provalov
>Priority: Critical
>  Labels: codec, format, payload, postings, reader, writer
> Attachments: TestPayloads.java
>
>
> I noticed weird payload behavior with Solr 6.3.0, and also with 7.7.2 and 
> 8.3.1.  After writing the Lucene62Codec-specific unit test (see attached; it 
> can also be run with the later versions), I think there could be a bug which 
> allows the same term's payload to be written into another document's payload 
> for that term (or the second document's payload not being read correctly).  
>   
>  For comparison, I added SimpleTextCodec, which doesn't behave this way. 
>   
>  For 8.3.1, you will need to change MultiFields.getTermPositionsEnum(...) to 
> MultiTerms.getTermPostingsEnum(...).
>   
>  Thanks to Alan Woodward, I made the necessary changes to the analyzer to 
> address the sharing of the TokenStreamComponents which was used in the 
> TestPayloads class.  Now I use a non-mocked tokenizer and a new filter which 
> creates a random payload (see attached).  So, documents one and two will have 
> the same token, but different payloads.  
> Same idea: SimpleTextCodec passes the test, but these codecs don't:
> Lucene50Codec;
> Lucene54Codec;
> Lucene62Codec;
> Lucene70Codec;
> Lucene80Codec;






[jira] [Resolved] (SOLR-14293) Payloads Are Written or Read Incorrectly - Across the Documents

2020-02-29 Thread Ivan Provalov (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov resolved SOLR-14293.
--
Resolution: Not A Problem

The test was not properly set up.  I updated the attached test.







[jira] [Updated] (SOLR-14293) Payloads Are Written or Read Incorrectly - Across the Documents

2020-02-29 Thread Ivan Provalov (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated SOLR-14293:
-
Attachment: (was: TestPayloads.java)







[jira] [Updated] (SOLR-14293) Payloads Are Written or Read Incorrectly - Across the Documents

2020-02-29 Thread Ivan Provalov (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ivan Provalov updated SOLR-14293:
-
Attachment: TestPayloads.java







[jira] [Commented] (SOLR-14258) DocList (DocSlice) should not implement DocSet

2020-02-29 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048405#comment-17048405
 ] 

David Smiley commented on SOLR-14258:
-

Okay; I marked them deprecated.  After 8.5 I'll back-port the changes.

> DocList (DocSlice) should not implement DocSet
> --
>
> Key: SOLR-14258
> URL: https://issues.apache.org/jira/browse/SOLR-14258
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> DocList is an internal interface used to hold the documents we'll ultimately 
> return from search.  It has one implementation, DocSlice, which implements 
> DocSet, but I think that was a mistake.  Basically nowhere does Solr depend 
> on the fact that a DocList is a DocSet today, and keeping it this way 
> complicates maintenance of DocSet's abstraction.
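The decoupling argued for above can be sketched as two unrelated abstractions: an ordered result window and an unordered membership set. The method names below are simplified assumptions, not Solr's exact signatures.

```java
public class DocListVsDocSet {

  // Ordered slice of results to return; rank position matters.
  interface DocList {
    int size();
    int docAt(int position);  // doc id at a given rank position
  }

  // Unordered membership test; pure set semantics, no ranking.
  interface DocSet {
    boolean exists(int docId);
  }

  // A DocSlice-like implementation satisfies DocList alone. Callers that
  // need set semantics must ask for (or build) a DocSet explicitly,
  // instead of getting one by accident through interface inheritance.
  static class DocSlice implements DocList {
    private final int[] docs;

    DocSlice(int[] docs) { this.docs = docs; }

    public int size() { return docs.length; }
    public int docAt(int position) { return docs[position]; }
  }

  public static void main(String[] args) {
    DocList top = new DocSlice(new int[] {42, 7, 19});
    System.out.println(top.size());    // 3
    System.out.println(top.docAt(0));  // 42
  }
}
```

With the interfaces unrelated, evolving DocSet's abstraction no longer constrains (or is constrained by) the result-window type, which is the maintenance point made in the issue.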






[jira] [Commented] (SOLR-14258) DocList (DocSlice) should not implement DocSet

2020-02-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048404#comment-17048404
 ] 

ASF subversion and git services commented on SOLR-14258:


Commit 37281783209cef2e7d30d21a298bea671a1ea52b in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3728178 ]

SOLR-14256, SOLR-14258: Deprecations








[jira] [Commented] (SOLR-14256) Remove HashDocSet

2020-02-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14256?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048403#comment-17048403
 ] 

ASF subversion and git services commented on SOLR-14256:


Commit 37281783209cef2e7d30d21a298bea671a1ea52b in lucene-solr's branch 
refs/heads/branch_8x from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3728178 ]

SOLR-14256, SOLR-14258: Deprecations


> Remove HashDocSet
> -
>
> Key: SOLR-14256
> URL: https://issues.apache.org/jira/browse/SOLR-14256
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: master (9.0)
>
>
> This particular DocSet is only used in places where we need to convert 
> SortedIntDocSet in particular to a DocSet that is fast for random access.  
> Once such a conversion happens, it's only used to test some docs for presence 
> and it could be another interface.  DocSet has kind of a large-ish API 
> surface area to implement.  Since we only need to test docs, we could use 
> Bits interface (having only 2 methods) backed by an off-the-shelf primitive 
> long hash set on our classpath.  Perhaps a new method on DocSet: getBits() or 
> DocSetUtil.getBits(DocSet).
> In addition to removing complexity unto itself, this improvement is required 
> by SOLR-14185 because it wants to be able to produce a DocIdSetIterator slice 
> directly from the DocSet but HashDocSet can't do that without sorting first.
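The Bits-based replacement proposed above can be sketched in a few lines. Lucene's Bits interface really does have just the two methods shown; the asBits helper and the use of a boxed HashSet are illustrative simplifications (the issue suggests an off-the-shelf primitive hash set to avoid boxing).

```java
import java.util.HashSet;
import java.util.Set;

public class BitsOverHashSet {

  // Mirror of org.apache.lucene.util.Bits, reproduced here so the sketch
  // is self-contained: a read-only random-access bit view.
  interface Bits {
    boolean get(int index);
    int length();
  }

  // What the issue proposes instead of a full HashDocSet: expose only a
  // presence test over a hash set of doc ids. A real implementation would
  // use a primitive int/long hash set rather than boxed Integers.
  static Bits asBits(int[] docIds, int maxDoc) {
    Set<Integer> set = new HashSet<>();
    for (int d : docIds) {
      set.add(d);
    }
    return new Bits() {
      public boolean get(int index) { return set.contains(index); }
      public int length() { return maxDoc; }
    };
  }

  public static void main(String[] args) {
    Bits bits = asBits(new int[] {3, 11, 42}, 100);
    System.out.println(bits.get(11));  // true
    System.out.println(bits.get(12));  // false
  }
}
```

Since callers only ever test docs for presence after the conversion, the two-method Bits surface replaces DocSet's much larger API at those call sites.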






[GitHub] [lucene-solr] iverase commented on issue #1290: LUCENE-9251: Filter equal edges with different value on isEdgeFromPolygon

2020-02-29 Thread GitBox
iverase commented on issue #1290: LUCENE-9251: Filter equal edges with 
different value on isEdgeFromPolygon
URL: https://github.com/apache/lucene-solr/pull/1290#issuecomment-592985468
 
 
   I agree, but the issue is that the polygon is too big to add to the unit 
test, and there is really no framework to read it from a file. My suggestion is 
to push this change as it is and open a Lucene issue for testing this kind of 
big polygon, probably by adding support to the test framework? WDYT?





[GitHub] [lucene-solr] iverase commented on a change in pull request #1290: LUCENE-9251: Filter equal edges with different value on isEdgeFromPolygon

2020-02-29 Thread GitBox
iverase commented on a change in pull request #1290: LUCENE-9251: Filter equal 
edges with different value on isEdgeFromPolygon
URL: https://github.com/apache/lucene-solr/pull/1290#discussion_r386049672
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/geo/Tessellator.java
 ##
 @@ -915,13 +914,14 @@ private static final Node filterPoints(final Node start, 
Node end) {
   continueIteration = false;
   nextNode = node.next;
   prevNode = node.previous;
-  //We can filter points when they are the same, if not and they are 
co-linear we can only
-  //remove it if both edges have the same value in .isNextEdgeFromPolygon
-  if (isVertexEquals(node, nextNode)  ||
-  (prevNode.isNextEdgeFromPolygon == node.isNextEdgeFromPolygon &&
+  // we can filter points when:
+  if (isVertexEquals(node, nextNode)  ||   // 1. they are the same,
+ // isVertexEquals(prevNode, nextNode) || // 2.- each one starts and 
ends in each other
 
 Review comment:
   Oops... that was a leftover from testing





[GitHub] [lucene-solr] nknize commented on a change in pull request #1253: LUCENE-9150: Restore support for dynamic PlanetModel in spatial3d

2020-02-29 Thread GitBox
nknize commented on a change in pull request #1253: LUCENE-9150: Restore 
support for dynamic PlanetModel in spatial3d
URL: https://github.com/apache/lucene-solr/pull/1253#discussion_r386046464
 
 

 ##
 File path: 
lucene/spatial3d/src/test/org/apache/lucene/spatial3d/TestGeo3DPoint.java
 ##
 @@ -84,6 +85,10 @@
 
 public class TestGeo3DPoint extends LuceneTestCase {
 
+  protected PlanetModel randomPlanetModel() {
+return RandomPicks.randomFrom(random(), new PlanetModel[] 
{/*PlanetModel.WGS84,*/ PlanetModel.CLARKE_1866});
+  }
 
 Review comment:
   +1





[GitHub] [lucene-solr] nknize commented on a change in pull request #1253: LUCENE-9150: Restore support for dynamic PlanetModel in spatial3d

2020-02-29 Thread GitBox
nknize commented on a change in pull request #1253: LUCENE-9150: Restore 
support for dynamic PlanetModel in spatial3d
URL: https://github.com/apache/lucene-solr/pull/1253#discussion_r386046274
 
 

 ##
 File path: 
lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/PlanetModel.java
 ##
 @@ -118,71 +176,191 @@ public PlanetModel(final InputStream inputStream) 
throws IOException {
   
   @Override
   public void write(final OutputStream outputStream) throws IOException {
-SerializableObject.writeDouble(outputStream, ab);
-SerializableObject.writeDouble(outputStream, c);
+SerializableObject.writeDouble(outputStream, xyScaling);
+SerializableObject.writeDouble(outputStream, zScaling);
   }
   
   /** Does this planet model describe a sphere?
*@return true if so.
*/
   public boolean isSphere() {
-return this.ab == this.c;
+return this.xyScaling == this.zScaling;
   }
   
   /** Find the minimum magnitude of all points on the ellipsoid.
* @return the minimum magnitude for the planet.
*/
   public double getMinimumMagnitude() {
-return Math.min(this.ab, this.c);
+return Math.min(this.xyScaling, this.zScaling);
   }
 
   /** Find the maximum magnitude of all points on the ellipsoid.
* @return the maximum magnitude for the planet.
*/
   public double getMaximumMagnitude() {
-return Math.max(this.ab, this.c);
+return Math.max(this.xyScaling, this.zScaling);
   }
   
   /** Find the minimum x value.
*@return the minimum X value.
*/
   public double getMinimumXValue() {
-return -this.ab;
+return -this.xyScaling;
   }
   
   /** Find the maximum x value.
*@return the maximum X value.
*/
   public double getMaximumXValue() {
-return this.ab;
+return this.xyScaling;
   }
 
   /** Find the minimum y value.
*@return the minimum Y value.
*/
   public double getMinimumYValue() {
-return -this.ab;
+return -this.xyScaling;
   }
   
   /** Find the maximum y value.
*@return the maximum Y value.
*/
   public double getMaximumYValue() {
-return this.ab;
+return this.xyScaling;
   }
   
   /** Find the minimum z value.
*@return the minimum Z value.
*/
   public double getMinimumZValue() {
-return -this.c;
+return -this.zScaling;
   }
   
   /** Find the maximum z value.
*@return the maximum Z value.
*/
   public double getMaximumZValue() {
-return this.c;
+return this.zScaling;
+  }
+
+  /** return the calculated mean radius (in meters) */
+  public double getMeanRadiusMeters() {
+return this.r1;
+  }
+
+  /** encode the provided value from double to integer space */
+  public int encodeValue(double x) {
+if (x > getMaximumMagnitude()) {
+  throw new IllegalArgumentException("value=" + x + " is out-of-bounds 
(greater than planetMax=" + getMaximumMagnitude() + ")");
+}
+if (x == getMaximumMagnitude()) {
+  x = Math.nextDown(x);
+}
+if (x < -getMaximumMagnitude()) {
+  throw new IllegalArgumentException("value=" + x + " is out-of-bounds 
(less than than -planetMax=" + -getMaximumMagnitude() + ")");
+}
+long result = (long) Math.floor(x / DECODE);
+assert result >= Integer.MIN_VALUE;
+assert result <= Integer.MAX_VALUE;
+return (int) result;
+  }
+
+  /**
+   * Decodes a given integer back into the radian value according to the 
defined planet model
+   */
+  public double decodeValue(int x) {
+double result;
+if (x == MIN_ENCODED_VALUE) {
+  // We must special case this, because -MAX_VALUE is not guaranteed to 
land precisely at a floor value, and we don't ever want to
+  // return a value outside of the planet's range (I think?).  The max 
value is "safe" because we floor during encode:
+  result = -MAX_VALUE;
+} else if (x == MAX_ENCODED_VALUE) {
+  result = MAX_VALUE;
+} else {
+  // We decode to the center value; this keeps the encoding stable
+  result = (x+0.5) * DECODE;
+}
+assert result >= -MAX_VALUE && result <= MAX_VALUE;
+return result;
+  }
+
+  /** Encode a provided GeoPoint into DocValue sortable integer space */
 
 Review comment:
   I added a getter for the docValueEncoder in PlanetModel
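The encodeValue/decodeValue pair in the diff above is a plain uniform quantizer once the planet-model plumbing is stripped away. The sketch below isolates that arithmetic; DECODE here is an arbitrary stand-in step size, not the real constant derived from the planet's maximum magnitude and the 32-bit integer range, and bounds checks are omitted.

```java
public class QuantizerSketch {

  // Arbitrary stand-in step size; the real DECODE depends on the planet
  // model's maximum magnitude.
  static final double DECODE = 1e-7;

  // Encode: floor to the containing cell, as in the diff above.
  static int encode(double x) {
    return (int) Math.floor(x / DECODE);
  }

  // Decode to the cell's center value. Decoding to the center keeps the
  // encoding stable: re-encoding a decoded value lands in the same cell.
  static double decode(int e) {
    return (e + 0.5) * DECODE;
  }

  public static void main(String[] args) {
    int e = encode(0.123456789);
    System.out.println(e);                       // 1234567
    System.out.println(encode(decode(e)) == e);  // true
  }
}
```

The stability property is why the real decodeValue returns the cell center rather than the cell's lower edge: a round-trip through decode/encode cannot drift into a neighboring cell.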





[GitHub] [lucene-solr] nknize commented on a change in pull request #1253: LUCENE-9150: Restore support for dynamic PlanetModel in spatial3d

2020-02-29 Thread GitBox
nknize commented on a change in pull request #1253: LUCENE-9150: Restore 
support for dynamic PlanetModel in spatial3d
URL: https://github.com/apache/lucene-solr/pull/1253#discussion_r386046244
 
 

 ##
 File path: 
lucene/spatial3d/src/java/org/apache/lucene/spatial3d/geom/PlanetModel.java
 ##
 @@ -20,6 +20,8 @@
 import java.io.OutputStream;
 import java.io.IOException;
 
+import org.apache.lucene.spatial3d.Geo3DDocValuesField.DocValueEncoder;
+
 
 Review comment:
   +1





[GitHub] [lucene-solr] nknize commented on a change in pull request #1253: LUCENE-9150: Restore support for dynamic PlanetModel in spatial3d

2020-02-29 Thread GitBox
nknize commented on a change in pull request #1253: LUCENE-9150: Restore 
support for dynamic PlanetModel in spatial3d
URL: https://github.com/apache/lucene-solr/pull/1253#discussion_r386046242
 
 

 ##
 File path: 
lucene/spatial3d/src/java/org/apache/lucene/spatial3d/Geo3DDocValuesField.java
 ##
 @@ -478,9 +303,211 @@ public static SortField newOutsideLargePolygonSort(final 
String field, final Pol
* @return SortField ordering documents by distance
* @throws IllegalArgumentException if {@code field} is null or path has 
invalid coordinates.
*/
-  public static SortField newOutsidePathSort(final String field, final 
double[] pathLatitudes, final double[] pathLongitudes, final double 
pathWidthMeters) {
-final GeoOutsideDistance shape = Geo3DUtil.fromPath(pathLatitudes, 
pathLongitudes, pathWidthMeters);
-return new Geo3DPointOutsideSortField(field, shape);
+  public static SortField newOutsidePathSort(final String field, final 
double[] pathLatitudes, final double[] pathLongitudes, final double 
pathWidthMeters, final PlanetModel planetModel) {
+final GeoOutsideDistance shape = Geo3DUtil.fromPath(planetModel, 
pathLatitudes, pathLongitudes, pathWidthMeters);
+return new Geo3DPointOutsideSortField(field, planetModel, shape);
   }
 
+  /** Utility class for encoding / decoding from lat/lon (decimal degrees) 
into sortable doc value numerics (integers) */
+  public static class DocValueEncoder {
 
 Review comment:
   +1





[jira] [Comment Edited] (SOLR-14258) DocList (DocSlice) should not implement DocSet

2020-02-29 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048291#comment-17048291
 ] 

Mikhail Khludnev edited comment on SOLR-14258 at 2/29/20 12:22 PM:
---

Given that we are heading toward 8.5, why don't we release it with 
deprecations, and commit the removals into branch_8x right after the release?


was (Author: mkhludnev):
Giving that we are heading 8.5, why don't release it with deprecations, and 
commit removals into brant_8x right after release. 







[jira] [Commented] (SOLR-14258) DocList (DocSlice) should not implement DocSet

2020-02-29 Thread Mikhail Khludnev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048291#comment-17048291
 ] 

Mikhail Khludnev commented on SOLR-14258:
-

Given that we are heading toward 8.5, why don't we release it with 
deprecations, and commit the removals into branch_8x right after the release?







[jira] [Commented] (LUCENE-9114) Add FunctionValues.cost

2020-02-29 Thread Atri Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048261#comment-17048261
 ] 

Atri Sharma commented on LUCENE-9114:
-

I strongly believe that this is the right approach and we should be pursuing 
this. I am actively working on this and will post a patch by Monday morning.

> Add FunctionValues.cost
> ---
>
> Key: LUCENE-9114
> URL: https://issues.apache.org/jira/browse/LUCENE-9114
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/query
>Reporter: David Smiley
>Priority: Major
>
> The FunctionRangeQuery uses FunctionValues.getRangeScorer which returns a 
> subclass of  ValueSourceScorer.  VSC's TwoPhaseIterator has a matchCost impl 
> that returns a constant 100.  This is pretty terrible; the cost should vary 
> based on the complexity of the ValueSource provided to FRQ.  ValueSources 
> are typically nested a number of levels, so they should aggregate.
> BTW there is a parallel concern for FunctionMatchQuery which works with 
> DoubleValuesSource which doesn't have a cost either, and unsurprisingly there 
> is a TPI with matchCost 100 there.  
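The aggregation idea can be sketched as follows (hypothetical simplified classes, not the actual `FunctionValues`/`ValueSource` API): each node reports its own cost plus the sum of its children's costs, which a `TwoPhaseIterator.matchCost()` could then return instead of the constant 100.

```java
import java.util.List;

// Hypothetical model of nested value sources with an aggregating cost();
// the real FunctionValues / DoubleValuesSource classes differ.
abstract class ValueSourceSketch {
    abstract List<ValueSourceSketch> children();
    abstract float selfCost();           // cost of this node alone
    final float cost() {                 // aggregate over the nesting
        float c = selfCost();
        for (ValueSourceSketch child : children()) c += child.cost();
        return c;
    }
}

final class ConstSource extends ValueSourceSketch {
    List<ValueSourceSketch> children() { return List.of(); }
    float selfCost() { return 1f; }
}

final class SumSource extends ValueSourceSketch {
    private final List<ValueSourceSketch> args;
    SumSource(ValueSourceSketch... args) { this.args = List.of(args); }
    List<ValueSourceSketch> children() { return args; }
    float selfCost() { return 2f; }
}

public class CostDemo {
    public static void main(String[] args) {
        // sum(const, sum(const, const)) -> 2 + 1 + (2 + 1 + 1) = 7
        ValueSourceSketch vs = new SumSource(new ConstSource(),
                new SumSource(new ConstSource(), new ConstSource()));
        System.out.println(vs.cost());   // prints 7.0
    }
}
```

A deeply nested arithmetic source then naturally reports a higher match cost than a trivial one, letting the two-phase iterator order checks sensibly.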






[jira] [Resolved] (SOLR-14295) Add the parameter description about "discardCompoundToken" for JapaneseTokenizer

2020-02-29 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida resolved SOLR-14295.
--
Fix Version/s: 8.5
   master (9.0)
   Resolution: Fixed

> Add the parameter description about "discardCompoundToken" for 
> JapaneseTokenizer
> 
>
> Key: SOLR-14295
> URL: https://issues.apache.org/jira/browse/SOLR-14295
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Fix For: master (9.0), 8.5
>
> Attachments: SOLR-14295.patch
>
>
> In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
> JapaneseTokenizer(Factory).
> The ref-guide needs to be updated to let Solr users know about this change.






[jira] [Commented] (SOLR-14295) Add the parameter description about "discardCompoundToken" for JapaneseTokenizer

2020-02-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048258#comment-17048258
 ] 

ASF subversion and git services commented on SOLR-14295:


Commit 3ab908afc0804abfee387973dc757d783d18fa9d in lucene-solr's branch 
refs/heads/branch_8x from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=3ab908a ]

SOLR-14295: Add the parameter description about 'discardCompoundToken' for 
JapaneseTokenizer in RefGuide


> Add the parameter description about "discardCompoundToken" for 
> JapaneseTokenizer
> 
>
> Key: SOLR-14295
> URL: https://issues.apache.org/jira/browse/SOLR-14295
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Attachments: SOLR-14295.patch
>
>
> In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
> JapaneseTokenizer(Factory).
> The ref-guide needs to be updated to let Solr users know about this change.






[jira] [Commented] (SOLR-11746) numeric fields need better error handling for prefix/wildcard syntax -- consider uniform support for "foo:* == foo:[* TO *]"

2020-02-29 Thread Tomoko Uchida (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048257#comment-17048257
 ] 

Tomoko Uchida commented on SOLR-11746:
--

It seems like the Ref Guide build is now failing due to the changes here.
{code:java}
solr-ref-guide $ ant build-site
...
build-site:
 [java] Relative link points at id that doesn't exist in dest: 
#differences-between-lucenes-classic-query-parser-and-solrs-standard-query-parser
 [java]  ... source: 
file:/mnt/hdd/repo/lucene-solr/solr/build/solr-ref-guide/html-site/the-standard-query-parser.html
 [java] Relative link points at id that doesn't exist in dest: 
the-standard-query-parser.html#differences-between-lucenes-classic-query-parser-and-solrs-standard-query-parser
 [java]  ... source: 
file:/mnt/hdd/repo/lucene-solr/solr/build/solr-ref-guide/html-site/common-query-parameters.html
 [java] Processed 2611 links (1932 relative) to 3477 anchors in 262 files
 [java] Total of 2 problems found

BUILD FAILED
/mnt/hdd/repo/lucene-solr/solr/solr-ref-guide/build.xml:251: Java returned: 255
{code}
The build works for me when I remove those two lines.
{code:java}
--- a/solr/solr-ref-guide/src/common-query-parameters.adoc
+++ b/solr/solr-ref-guide/src/common-query-parameters.adoc
@@ -102,7 +102,7 @@ fq=+popularity:[10 TO *] +section:0
 
 
 * The document sets from each filter query are cached independently. Thus, 
concerning the previous examples: use a single `fq` containing two mandatory 
clauses if those clauses appear together often, and use two separate `fq` 
parameters if they are relatively independent. (To learn about tuning cache 
sizes and making sure a filter cache actually exists, see 
<>.)
-* It is also possible to use 
<> inside the `fq` to cache clauses individually and - among other 
things - to achieve union of cached filter queries.
+// * It is also possible to use 
<> inside the `fq` to cache clauses individually and - among other 
things - to achieve union of cached filter queries.

diff --git a/solr/solr-ref-guide/src/the-standard-query-parser.adoc 
b/solr/solr-ref-guide/src/the-standard-query-parser.adoc
index c572e503e5b..3a3cd7f958d 100644
--- a/solr/solr-ref-guide/src/the-standard-query-parser.adoc
+++ b/solr/solr-ref-guide/src/the-standard-query-parser.adoc
@@ -174,7 +174,7 @@ The brackets around a query determine its inclusiveness.
 * You can mix these types so one end of the range is inclusive and the other 
is exclusive. Here's an example: `count:{1 TO 10]`
 
 Wildcards, `*`, can also be used for either or both endpoints to specify an 
open-ended range query.
-This is a 
<<#differences-between-lucenes-classic-query-parser-and-solrs-standard-query-parser,divergence
 from Lucene's Classic Query Parser>>.
+// This is a 
<<#differences-between-lucenes-classic-query-parser-and-solrs-standard-query-parser,divergence
 from Lucene's Classic Query Parser>>.
{code}
I know nothing about this issue, just noticed the broken links when I updated 
the ref-guide on another issue...

> numeric fields need better error handling for prefix/wildcard syntax -- 
> consider uniform support for "foo:* == foo:[* TO *]"
> 
>
> Key: SOLR-11746
> URL: https://issues.apache.org/jira/browse/SOLR-11746
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 7.0
>Reporter: Chris M. Hostetter
>Assignee: Houston Putman
>Priority: Major
> Fix For: master (9.0), 8.5
>
> Attachments: SOLR-11746.patch, SOLR-11746.patch, SOLR-11746.patch, 
> SOLR-11746.patch, SOLR-11746.patch, SOLR-11746.patch, SOLR-11746.patch, 
> SOLR-11746.patch, SOLR-11746.patch, SOLR-11746.patch, SOLR-11746.patch
>
>
> On the solr-user mailing list, Torsten Krah pointed out that with Trie 
> numeric fields, query syntax such as {{foo_d:\*}} has been functionally 
> equivalent to {{foo_d:\[\* TO \*]}} and asked why this was not also supported 
> for Point based numeric fields.
> The fact that this type of syntax works (for {{indexed="true"}} Trie fields) 
> appears to have been an (untested, undocumented) fluke of Trie fields given 
> that they use indexed terms for the (encoded) numeric terms and inherit the 
> default implementation of {{FieldType.getPrefixQuery}} which produces a 
> prefix query against the {{""}} (empty string) term.  
> (Note that this syntax has apparently _*never*_ worked for Trie fields with 
> {{indexed="false" docValues="true"}} )
> In general, we should assess the behavior when users attempt a prefix/wildcard 
> syntax query against numeric fields, as currently the behavior is largely 
> nonsensical: prefix/wildcard syntax frequently matches no docs w/o any sort 
> of error, and the aforementioned {{numeric_field:*}} behaves inconsistently 

[jira] [Commented] (SOLR-14291) OldAnalyticsRequestConverter should support fields names with dots

2020-02-29 Thread Anatolii Siuniaev (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048251#comment-17048251
 ] 

Anatolii Siuniaev commented on SOLR-14291:
--

Yep, I'll create a patch in a couple of days. 
What do you mean by that article? 

> OldAnalyticsRequestConverter should support fields names with dots
> --
>
> Key: SOLR-14291
> URL: https://issues.apache.org/jira/browse/SOLR-14291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SearchComponents - other
>Reporter: Anatolii Siuniaev
>Priority: Trivial
>
> If you send a query with range facets using old olap-style syntax (see here), 
> OldAnalyticsRequestConverter just silently (no exception thrown) omits 
> parameters like
> {code:java}
> olap..rangefacet..start
> {code}
> in case __ has dots inside (for instance, the field name is 
> _Project.Value_), and thus no range facets are returned in the response.  
> Probably the same happens with field faceting. 
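One plausible fix can be sketched like this (the parameter shape and helper are illustrative assumptions, not the actual `OldAnalyticsRequestConverter` code): anchor the known prefix and suffix of the parameter name and let a greedy group capture everything in between, so dots inside the field name survive.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParamParseDemo {
    // Greedy middle group (.+) keeps dots inside the field name;
    // the "olap." prefix and ".rangefacet.start" suffix are illustrative,
    // not the exact old olap-style parameter syntax.
    private static final Pattern RANGE_START =
        Pattern.compile("^olap\\.(.+)\\.rangefacet\\.start$");

    static String fieldOf(String param) {
        Matcher m = RANGE_START.matcher(param);
        return m.matches() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(fieldOf("olap.Project.Value.rangefacet.start")); // prints Project.Value
        System.out.println(fieldOf("olap.price.rangefacet.start"));         // prints price
    }
}
```

Splitting on every dot loses "Project.Value"; anchoring the fixed parts and matching the middle greedily is one way to keep it intact.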






[jira] [Updated] (SOLR-14295) Add the parameter description about "discardCompoundToken" for JapaneseTokenizer

2020-02-29 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-14295:
-
Summary: Add the parameter description about "discardCompoundToken" for 
JapaneseTokenizer  (was: Add the parameter descriptionn about 
"discardCompoundToken" for JapaneseTokenizer)

> Add the parameter description about "discardCompoundToken" for 
> JapaneseTokenizer
> 
>
> Key: SOLR-14295
> URL: https://issues.apache.org/jira/browse/SOLR-14295
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Attachments: SOLR-14295.patch
>
>
> In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
> JapaneseTokenizer(Factory).
> The ref-guide needs to be updated to let Solr users know this change.






[jira] [Updated] (SOLR-14295) Add the parameter description about "discardCompoundToken" for JapaneseTokenizer

2020-02-29 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-14295:
-
Description: 
In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
JapaneseTokenizer(Factory).

The ref-guide needs to be updated to let Solr users know about this change.

  was:
In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
JapaneseTokenizer(Factory).

The ref-guide needs to be updated to let Solr users know this change.


> Add the parameter description about "discardCompoundToken" for 
> JapaneseTokenizer
> 
>
> Key: SOLR-14295
> URL: https://issues.apache.org/jira/browse/SOLR-14295
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Attachments: SOLR-14295.patch
>
>
> In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
> JapaneseTokenizer(Factory).
> The ref-guide needs to be updated to let Solr users know about this change.






[jira] [Commented] (SOLR-14295) Add the parameter descriptionn about "discardCompoundToken" for JapaneseTokenizer

2020-02-29 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17048250#comment-17048250
 ] 

ASF subversion and git services commented on SOLR-14295:


Commit 5f9bf6b707a398e5cb42bf313b3a444e169ae6fa in lucene-solr's branch 
refs/heads/master from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5f9bf6b ]

SOLR-14295: Add the parameter description about 'discardCompoundToken' for 
JapaneseTokenizer in RefGuide


> Add the parameter descriptionn about "discardCompoundToken" for 
> JapaneseTokenizer
> -
>
> Key: SOLR-14295
> URL: https://issues.apache.org/jira/browse/SOLR-14295
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Attachments: SOLR-14295.patch
>
>
> In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
> JapaneseTokenizer(Factory).
> The ref-guide needs to be updated to let Solr users know this change.






[jira] [Updated] (SOLR-14295) Add the parameter descriptionn about "discardCompoundToken" for JapaneseTokenizer

2020-02-29 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated SOLR-14295:
-
Attachment: SOLR-14295.patch

> Add the parameter descriptionn about "discardCompoundToken" for 
> JapaneseTokenizer
> -
>
> Key: SOLR-14295
> URL: https://issues.apache.org/jira/browse/SOLR-14295
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Tomoko Uchida
>Assignee: Tomoko Uchida
>Priority: Minor
> Attachments: SOLR-14295.patch
>
>
> In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
> JapaneseTokenizer(Factory).
> The ref-guide needs to be updated to let Solr users know this change.






[jira] [Created] (SOLR-14295) Add the parameter descriptionn about "discardCompoundToken" for JapaneseTokenizer

2020-02-29 Thread Tomoko Uchida (Jira)
Tomoko Uchida created SOLR-14295:


 Summary: Add the parameter descriptionn about 
"discardCompoundToken" for JapaneseTokenizer
 Key: SOLR-14295
 URL: https://issues.apache.org/jira/browse/SOLR-14295
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Tomoko Uchida
Assignee: Tomoko Uchida


In [LUCENE-9123], a parameter {{discardCompoundToken}} was added to 
JapaneseTokenizer(Factory).

The ref-guide needs to be updated to let Solr users know this change.


