[GitHub] [lucene-solr] jpountz commented on a change in pull request #919: LUCENE-8994: Code Cleanup - Pass values to list constructor instead of empty constructor followed by addAll().

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #919: LUCENE-8994: Code Cleanup - 
Pass values to list constructor instead of empty constructor followed by 
addAll().
URL: https://github.com/apache/lucene-solr/pull/919#discussion_r331361807
 
 

 ##
 File path: lucene/CHANGES.txt
 ##
 @@ -180,6 +180,8 @@ Other
 * LUCENE-8993, LUCENE-8807: Changed all repository and download references in 
build files
   to HTTPS. (Uwe Schindler)
 
+* LUCENE-8994: Code Cleanup - Pass values to list constructor instead of empty 
constructor followed by addAll().
 
 Review comment:
   can you add your name in parentheses?
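The cleanup named in the PR title can be sketched with a minimal, self-contained example (the class and variable names here are made up for illustration, not taken from the PR):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ListConstructorCleanup {
    public static void main(String[] args) {
        List<String> source = Arrays.asList("a", "b", "c");

        // Before: empty constructor followed by addAll()
        List<String> before = new ArrayList<>();
        before.addAll(source);

        // After: pass the values to the copy constructor, which sizes
        // the backing array up front and avoids a possible resize
        List<String> after = new ArrayList<>(source);

        System.out.println(before.equals(after)); // prints "true"
    }
}
```

Both forms produce the same list; the constructor form is shorter and lets ArrayList allocate the right capacity in one step.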


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] jpountz commented on a change in pull request #881: LUCENE-8979: Code Cleanup: Use entryset for map iteration wherever possible. - part 2

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #881: LUCENE-8979: Code Cleanup: 
Use entryset for map iteration wherever possible. - part 2
URL: https://github.com/apache/lucene-solr/pull/881#discussion_r331360679
 
 

 ##
 File path: lucene/CHANGES.txt
 ##
 @@ -180,6 +180,8 @@ Other
 * LUCENE-8993, LUCENE-8807: Changed all repository and download references in 
build files
   to HTTPS. (Uwe Schindler)
 
+* LUCENE-8979: Code Cleanup: Use entryset for map iteration wherever possible. 
- Part 2
 
 Review comment:
   can you add your name in parentheses like other changes?
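The pattern this PR applies can be sketched as follows (hypothetical names, not code from the PR):

```java
import java.util.HashMap;
import java.util.Map;

public class EntrySetIteration {
    public static void main(String[] args) {
        Map<String, Integer> counts = new HashMap<>();
        counts.put("foo", 1);
        counts.put("bar", 2);

        // Before: iterate the keySet and look each value up again,
        // costing an extra hash probe per key
        int sumBefore = 0;
        for (String key : counts.keySet()) {
            sumBefore += counts.get(key);
        }

        // After: iterate the entrySet and read key and value together
        int sumAfter = 0;
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            sumAfter += entry.getValue();
        }

        System.out.println(sumBefore == sumAfter); // prints "true"
    }
}
```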





[jira] [Commented] (LUCENE-8213) Cache costly subqueries asynchronously

2019-10-03 Thread Adrien Grand (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944259#comment-16944259
 ] 

Adrien Grand commented on LUCENE-8213:
--

I don't think these queries have anything special besides being a bit slow.

> Cache costly subqueries asynchronously
> --
>
> Key: LUCENE-8213
> URL: https://issues.apache.org/jira/browse/LUCENE-8213
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/query/scoring
>Affects Versions: 7.2.1
>Reporter: Amir Hadadi
>Priority: Minor
>  Labels: performance
> Attachments: 
> 0001-Reproduce-across-segment-caching-of-same-query.patch, 
> thetaphi_Lucene-Solr-master-Linux_24839.log.txt
>
>  Time Spent: 13h 20m
>  Remaining Estimate: 0h
>
> IndexOrDocValuesQuery allows to combine costly range queries with a selective 
> lead iterator in an optimized way. However, the range query at some point 
> gets cached by a querying thread in LRUQueryCache, which negates the 
> optimization of IndexOrDocValuesQuery for that specific query.
> It would be nice to see an asynchronous caching implementation in such cases, 
> so that queries involving IndexOrDocValuesQuery would have consistent 
> performance characteristics.
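As a toy illustration of the cost-based choice IndexOrDocValuesQuery makes (the Query interface and all names below are invented for this sketch and are not Lucene's API):

```java
public class CostBasedQueryChoice {
    interface Query {
        long cost();
        String execute();
    }

    // Mimics the IndexOrDocValuesQuery idea: keep two equivalent
    // implementations and pick whichever is cheaper given the lead iterator.
    static String executeCheapest(Query indexQuery, Query docValuesQuery, long leadCost) {
        // when the lead iterator is selective, a per-doc docvalues check wins;
        // otherwise the index structure is faster
        Query chosen = leadCost < indexQuery.cost() ? docValuesQuery : indexQuery;
        return chosen.execute();
    }

    public static void main(String[] args) {
        Query points = new Query() {
            public long cost() { return 1_000_000; }
            public String execute() { return "points"; }
        };
        Query docValues = new Query() {
            public long cost() { return Long.MAX_VALUE; } // per-doc verification
            public String execute() { return "docvalues"; }
        };
        // selective lead iterator -> verify with doc values
        System.out.println(executeCheapest(points, docValues, 100)); // prints "docvalues"
        // dense lead iterator -> use the points index
        System.out.println(executeCheapest(points, docValues, 10_000_000)); // prints "points"
    }
}
```

Caching the range query alone, as described above, removes exactly this per-query choice, which is what the issue proposes to fix with asynchronous caching.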



--
This message was sent by Atlassian Jira
(v8.3.4#803005)




[GitHub] [lucene-solr] iverase commented on a change in pull request #905: LUCENE-8990: Add estimateDocCount(visitor) method to PointValues

2019-10-03 Thread GitBox
iverase commented on a change in pull request #905: LUCENE-8990: Add 
estimateDocCount(visitor) method to PointValues
URL: https://github.com/apache/lucene-solr/pull/905#discussion_r331353384
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/PointValues.java
 ##
 @@ -241,9 +241,28 @@ default void visit(DocIdSetIterator iterator, byte[] packedValue) throws IOExcep
    * than {@link #intersect(IntersectVisitor)}.
    * @see DocIdSetIterator#cost */
   public long estimateDocCount(IntersectVisitor visitor) {
-    return (long) Math.ceil(estimatePointCount(visitor) / ((double) size() / getDocCount()));
+    long estimatedPointCount = estimatePointCount(visitor);
+    int docCount = getDocCount();
+    double size = size();
+    if (estimatedPointCount >= size) {
+      // match all docs
+      return docCount;
+    } else if (size == docCount || estimatedPointCount == 0L) {
+      // if the point count estimate is 0 or we have only single values,
+      // return this estimate
+      return estimatedPointCount;
+    } else {
+      // in case of multi values, estimate the number of docs using the solution provided in
+      // https://math.stackexchange.com/questions/1175295/urn-problem-probability-of-drawing-balls-of-k-unique-colors
+      // then approximate the solution for points per doc << size(), which results in the expression
+      // D * (1 - ((N - n) / N)^(N/D))
 Review comment:
   It is clarified in the following line.





[GitHub] [lucene-solr] iverase commented on issue #905: LUCENE-8990: Add estimateDocCount(visitor) method to PointValues

2019-10-03 Thread GitBox
iverase commented on issue #905: LUCENE-8990: Add estimateDocCount(visitor) 
method to PointValues
URL: https://github.com/apache/lucene-solr/pull/905#issuecomment-538255134
 
 
   @jpountz showed me that the solution can be simplified to the following 
expression if we consider that NumberOfPoints >> NumberValuesPerDoc:
   
   ```
   D * (1 - ((N - n) / N)^(N/D))
   ```
   
   where D is the total number of docs, N the total number of values, and n the 
number of counted values.
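A rough numeric sketch of that expression, with made-up inputs (this is not the actual Lucene implementation):

```java
public class DocCountEstimate {
    // D = total docs, N = total values, n = counted values; this assumes the
    // total number of points is much larger than the number of values per doc
    static long estimateDocCount(long n, long N, long D) {
        return (long) Math.ceil(D * (1 - Math.pow((double) (N - n) / N, (double) N / D)));
    }

    public static void main(String[] args) {
        // e.g. 1000 docs holding 5000 values, 2500 of which matched
        System.out.println(estimateDocCount(2500, 5000, 1000)); // prints "969"
    }
}
```

Note the estimate is close to, but below, the doc count: with multiple values per doc, some matched values land in the same document.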





[jira] [Resolved] (SOLR-8241) Evaluate W-TinyLfu cache

2019-10-03 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-8241.

Resolution: Fixed

Thank you all for your contributions and patience! :)

I created a follow-up issue SOLR-13817 to deprecate and eventually remove other 
cache implementations from Solr.

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Ben Manes
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.3
>
> Attachments: EvictionBenchmark.png, GetPutBenchmark.png, 
> SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch, SOLR-8241.patch, caffeine-benchmark.txt, proposal.patch, 
> solr_caffeine.patch.gz, solr_jmh_results.json
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency






[jira] [Updated] (SOLR-8241) Evaluate W-TinyLfu cache

2019-10-03 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-8241:
---
Fix Version/s: (was: master (9.0))
   8.3

> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Ben Manes
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.3
>
> Attachments: EvictionBenchmark.png, GetPutBenchmark.png, 
> SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch, SOLR-8241.patch, caffeine-benchmark.txt, proposal.patch, 
> solr_caffeine.patch.gz, solr_jmh_results.json
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency






[jira] [Created] (SOLR-13817) Deprecate legacy SolrCache implementations

2019-10-03 Thread Andrzej Bialecki (Jira)
Andrzej Bialecki created SOLR-13817:
---

 Summary: Deprecate legacy SolrCache implementations
 Key: SOLR-13817
 URL: https://issues.apache.org/jira/browse/SOLR-13817
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Andrzej Bialecki
Assignee: Andrzej Bialecki


Now that SOLR-8241 has been committed I propose to deprecate other cache 
implementations in 8x and remove them altogether from 9.0, in order to reduce 
confusion and maintenance costs.






[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944245#comment-16944245
 ] 

ASF subversion and git services commented on SOLR-8241:
---

Commit ae80c181d80aad422faf7fdfb8a1c699a59d49d6 in lucene-solr's branch 
refs/heads/branch_8x from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ae80c18 ]

SOLR-8241: Add CaffeineCache, an efficient implementation of SolrCache.


> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Ben Manes
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: EvictionBenchmark.png, GetPutBenchmark.png, 
> SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch, SOLR-8241.patch, caffeine-benchmark.txt, proposal.patch, 
> solr_caffeine.patch.gz, solr_jmh_results.json
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency






[jira] [Resolved] (LUCENE-8860) LatLonShapeBoundingBoxQuery could make more decisions on inner nodes

2019-10-03 Thread Ignacio Vera (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera resolved LUCENE-8860.
--
Fix Version/s: 8.3
 Assignee: Ignacio Vera
   Resolution: Fixed

Thanks [~imotov]!

> LatLonShapeBoundingBoxQuery could make more decisions on inner nodes
> 
>
> Key: LUCENE-8860
> URL: https://issues.apache.org/jira/browse/LUCENE-8860
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Ignacio Vera
>Priority: Minor
> Fix For: 8.3
>
> Attachments: fig1.png, fig2.png, fig3.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently LatLonShapeBoundingBoxQuery with the INTERSECTS relation only 
> returns CELL_INSIDE_QUERY if the query contains ALL minimum bounding 
> rectangles of the indexed triangles.
> I think we could return CELL_INSIDE_QUERY if the box contains either of the 
> edges of all MBRs of indexed triangles since triangles are guaranteed to 
> touch all edges of their MBR by definition. In some cases this would help 
> save decoding triangles and running costly point-in-triangle computations.
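The property the proposal relies on (a triangle touches every edge of its minimum bounding rectangle) follows directly from how the MBR is computed. A small sketch with hypothetical names, not Lucene code:

```java
public class TriangleMbrProperty {
    // The MBR of a triangle is the min/max of its vertex coordinates, so every
    // MBR edge passes through at least one vertex. This is the property the
    // proposed CELL_INSIDE_QUERY shortcut relies on.
    static boolean touchesAllMbrEdges(double[] xs, double[] ys) {
        double minX = Math.min(xs[0], Math.min(xs[1], xs[2]));
        double maxX = Math.max(xs[0], Math.max(xs[1], xs[2]));
        double minY = Math.min(ys[0], Math.min(ys[1], ys[2]));
        double maxY = Math.max(ys[0], Math.max(ys[1], ys[2]));
        boolean left = false, right = false, bottom = false, top = false;
        for (int i = 0; i < 3; i++) {
            left |= xs[i] == minX;
            right |= xs[i] == maxX;
            bottom |= ys[i] == minY;
            top |= ys[i] == maxY;
        }
        return left && right && bottom && top;
    }

    public static void main(String[] args) {
        // an arbitrary triangle: its bounding box is touched on all four sides
        System.out.println(touchesAllMbrEdges(
                new double[] {2.0, 5.0, 3.0},
                new double[] {1.0, 4.0, 6.0})); // prints "true"
    }
}
```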






[jira] [Commented] (LUCENE-8860) LatLonShapeBoundingBoxQuery could make more decisions on inner nodes

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944238#comment-16944238
 ] 

ASF subversion and git services commented on LUCENE-8860:
-

Commit 800971020aa2a35f9b2ba1b76f7bca244f005f7d in lucene-solr's branch 
refs/heads/branch_8x from Igor Motov
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8009710 ]

LUCENE-8860: add additional leaf node level optimizations in 
LatLonShapeBoundingBoxQuery. (#844)

# Conflicts:
#   lucene/sandbox/src/java/org/apache/lucene/geo/Rectangle2D.java


> LatLonShapeBoundingBoxQuery could make more decisions on inner nodes
> 
>
> Key: LUCENE-8860
> URL: https://issues.apache.org/jira/browse/LUCENE-8860
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: fig1.png, fig2.png, fig3.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently LatLonShapeBoundingBoxQuery with the INTERSECTS relation only 
> returns CELL_INSIDE_QUERY if the query contains ALL minimum bounding 
> rectangles of the indexed triangles.
> I think we could return CELL_INSIDE_QUERY if the box contains either of the 
> edges of all MBRs of indexed triangles since triangles are guaranteed to 
> touch all edges of their MBR by definition. In some cases this would help 
> save decoding triangles and running costly point-in-triangle computations.






[GitHub] [lucene-solr] iverase merged pull request #844: LUCENE-8860: Make more decision on inner nodes in ShapeBoundingBoxQuery

2019-10-03 Thread GitBox
iverase merged pull request #844: LUCENE-8860: Make more decision on inner 
nodes in ShapeBoundingBoxQuery
URL: https://github.com/apache/lucene-solr/pull/844
 
 
   





[jira] [Commented] (LUCENE-8860) LatLonShapeBoundingBoxQuery could make more decisions on inner nodes

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944225#comment-16944225
 ] 

ASF subversion and git services commented on LUCENE-8860:
-

Commit d4ab808a8ab8c58a9ddbc7d4f108df7f1f4c0b51 in lucene-solr's branch 
refs/heads/master from Igor Motov
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d4ab808 ]

LUCENE-8860: add additional leaf node level optimizations in 
LatLonShapeBoundingBoxQuery. (#844)



> LatLonShapeBoundingBoxQuery could make more decisions on inner nodes
> 
>
> Key: LUCENE-8860
> URL: https://issues.apache.org/jira/browse/LUCENE-8860
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: fig1.png, fig2.png, fig3.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently LatLonShapeBoundingBoxQuery with the INTERSECTS relation only 
> returns CELL_INSIDE_QUERY if the query contains ALL minimum bounding 
> rectangles of the indexed triangles.
> I think we could return CELL_INSIDE_QUERY if the box contains either of the 
> edges of all MBRs of indexed triangles since triangles are guaranteed to 
> touch all edges of their MBR by definition. In some cases this would help 
> save decoding triangles and running costly point-in-triangle computations.






[jira] [Commented] (LUCENE-8999) expectThrows doesn't play nicely with "assume" failures

2019-10-03 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944214#comment-16944214
 ] 

Munendra S N commented on LUCENE-8999:
--

[~hossman]
Thanks for fixing this. I missed it in LUCENE-8938.
All changes LGTM.
If I may point out some typos (it would be nice if they were corrected :)):
* In the javadoc for {{_expectThrows}} in LuceneTestCase, {{linke}} should have been {{link}}
* In {{TestExpectThrows}}, {{propogated}} should have been {{propagated}}

> expectThrows doesn't play nicely with "assume" failures
> ---
>
> Key: LUCENE-8999
> URL: https://issues.apache.org/jira/browse/LUCENE-8999
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: LUCENE-8999.patch
>
>
> Once upon a time, {{TestRunWithRestrictedPermissions}} used to have test 
> methods that looked like this...
> {code:java}
> try {
>   runWithRestrictedPermissions(this::doSomeForbiddenStuff);
>   fail("this should not pass!");
> } catch (SecurityException se) {
>   // pass
> }
> {code}
> LUCENE-8938 changed this code to look like this...
> {code:java}
> expectThrows(SecurityException.class, () -> 
> runWithRestrictedPermissions(this::doSomeForbiddenStuff));
> {code}
> But a nuance of the existing code that isn't captured in the new code is that 
> {{runWithRestrictedPermissions(...)}} explicitly uses {{assumeTrue(..., 
> System.getSecurityManager() != null)}} to ensure that if a security manager 
> is not in use, the test should be {{SKIPed}} and not considered a pass or a 
> fail.
> The key issue being that {{assumeTrue(...)}} (and other 'assume' related 
> methods like it) throws an {{AssumptionViolatedException}} when the condition 
> isn't met, expecting this to propagate up to the Test Runner.
> With the _old_ code this worked as expected - the 
> {{AssumptionViolatedException}} would abort execution before the 
> {{fail(...)}} but not be caught by the {{catch}} and bubble up all the way to 
> the test runner so the test would be recorded as a SKIP.
> With the new code, {{expectThrows()}} is catching the 
> {{AssumptionViolatedException}} and since it doesn't match the expected 
> {{SecurityException.class}} is generating a test failure instead...
> {noformat}
>[junit4] Suite: org.apache.lucene.util.TestRunWithRestrictedPermissions
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestRunWithRestrictedPermissions 
> -Dtests.method=testCompletelyForbidden2 -Dtests.seed=4181E5FE9E84DBC4 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=luy -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J7  | 
> TestRunWithRestrictedPermissions.testCompletelyForbidden2 <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> Unexpected exception type, expected SecurityException but got 
> org.junit.AssumptionViolatedException: runWithRestrictedPermissions requires 
> a SecurityManager enabled
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([4181E5FE9E84DBC4:16509163A0E04B41]:0)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2729)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2718)
>[junit4]>at 
> org.apache.lucene.util.TestRunWithRestrictedPermissions.testCompletelyForbidden2(TestRunWithRestrictedPermissions.java:39)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
>[junit4]>at java.base/java.lang.Thread.run(Thread.java:834)
>[junit4]> Caused by: org.junit.AssumptionViolatedException: 
> runWithRestrictedPermissions requires a SecurityManager enabled
>[junit4]>at 
> com.carrotsearch.randomizedtesting.RandomizedTest.assumeTrue(RandomizedTest.java:725)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.assumeTrue(LuceneTestCase.java:873)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.runWithRestrictedPermissions(LuceneTestCase.java:2917)
>[junit4]>at 
> org.apache.lucene.util.TestR
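The fix this behavior calls for (letting assumption failures bubble past expectThrows) can be sketched in plain Java. This is a hypothetical stand-in, not the actual LuceneTestCase code, and a local exception class substitutes for org.junit.AssumptionViolatedException:

```java
public class AssumeAwareExpectThrows {
    // Stand-in for org.junit.AssumptionViolatedException
    static class AssumptionViolatedException extends RuntimeException {
        AssumptionViolatedException(String msg) { super(msg); }
    }

    // Rethrow assumption failures so the runner records a SKIP; only treat
    // genuinely unexpected exception types as test failures.
    static <T extends Throwable> T expectThrows(Class<T> expected, Runnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (t instanceof AssumptionViolatedException) {
                throw (AssumptionViolatedException) t; // propagate as SKIP
            }
            if (expected.isInstance(t)) {
                return expected.cast(t);
            }
            throw new AssertionError("Unexpected exception type", t);
        }
        throw new AssertionError("Expected exception was not thrown");
    }

    public static void main(String[] args) {
        // The expected exception is caught and returned
        SecurityException se = expectThrows(SecurityException.class,
                () -> { throw new SecurityException("forbidden"); });
        System.out.println(se.getMessage()); // prints "forbidden"

        // An assumption failure propagates instead of being reported
        // as an unexpected exception type
        boolean propagated = false;
        try {
            expectThrows(SecurityException.class,
                    () -> { throw new AssumptionViolatedException("no SecurityManager"); });
        } catch (AssumptionViolatedException e) {
            propagated = true;
        }
        System.out.println(propagated); // prints "true"
    }
}
```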

[jira] [Commented] (SOLR-13787) An annotation based system to write v2 only APIs

2019-10-03 Thread Noble Paul (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944195#comment-16944195
 ] 

Noble Paul commented on SOLR-13787:
---

I've changed the annotations a bit. If you do not have a proper JSON payload 
command, like the schema API or config API, you do not need a schema; everything 
can be done with annotations. In the absence of a JSON schema for a command, 
introspect may not be able to report the schema of the payload, but everything 
can still work without a schema.

> An annotation based system to write v2 only APIs
> 
>
> Key: SOLR-13787
> URL: https://issues.apache.org/jira/browse/SOLR-13787
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> An example v2 API may look as follows:
> {code:java}
> @EndPoint(
>  path = "/api/cluster/package",
>  method = POST,
>  permission = PKG_EDIT
> )
> static class PkgEdit {
>  @Command(name = "add", jsonSchema = "cluster.package.add.json")
>  public void add(CallInfo callInfo) throws Exception {
>  }
>  @Command(name = "update", jsonSchema = "cluster.package.update.json")
>  public void update(CallInfo callInfo) throws Exception {
> }
>  @Command(name = "delete", jsonSchema = "cluster.package.delete.json")
>  boolean deletePackage(CallInfo params) throws Exception {
> }
> {code}
> This expects you to already have the API spec json 
>  
> The annotations are:
>  
> {code:java}
> @Retention(RetentionPolicy.RUNTIME)
> @Target({ElementType.TYPE})
> public @interface EndPoint {
> /**The supported http methods*/
>   SolrRequest.METHOD[] method();
> /**supported paths*/
>   String[] path();
>   PermissionNameProvider.Name permission();
> }
> {code}
> {code:java}
> @Retention(RetentionPolicy.RUNTIME)
> @Target(ElementType.METHOD)
> public @interface Command {
>/**if this is not a json command, leave it empty.
>* Keep in mind that you cannot have duplicates.
>* Only one method per name
>*
>*/
>   String name() default "";
>   String jsonSchema() default "";
> }
> {code}
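A hypothetical sketch of how such annotations might drive command dispatch via reflection (illustrative only, not Solr's actual framework code):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

public class CommandDispatchSketch {
    // Simplified stand-in for the @Command annotation described above
    @Retention(RetentionPolicy.RUNTIME)
    @interface Command { String name() default ""; }

    static class PkgEdit {
        @Command(name = "add") public String add() { return "added"; }
        @Command(name = "delete") public String delete() { return "deleted"; }
    }

    // Find the method whose @Command name matches the incoming command,
    // as an annotation-driven v2 API framework might at request time.
    static String dispatch(Object target, String command) {
        for (Method m : target.getClass().getDeclaredMethods()) {
            Command c = m.getAnnotation(Command.class);
            if (c != null && c.name().equals(command)) {
                try {
                    return (String) m.invoke(target);
                } catch (ReflectiveOperationException e) {
                    throw new RuntimeException(e);
                }
            }
        }
        throw new IllegalArgumentException("unknown command: " + command);
    }

    public static void main(String[] args) {
        System.out.println(dispatch(new PkgEdit(), "add")); // prints "added"
    }
}
```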






[jira] [Updated] (SOLR-13787) An annotation based system to write v2 only APIs

2019-10-03 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13787:
--
Description: 
An example v2 API may look as follows:
{code:java}
@EndPoint(
 path = "/api/cluster/package",
 method = POST,
 permission = PKG_EDIT
)
static class PkgEdit {
 @Command(name = "add", jsonSchema = "cluster.package.add.json")
 public void add(CallInfo callInfo) throws Exception {


 }

 @Command(name = "update", jsonSchema = "cluster.package.update.json")
 public void update(CallInfo callInfo) throws Exception {
}

 @Command(name = "delete", jsonSchema = "cluster.package.delete.json")
 boolean deletePackage(CallInfo params) throws Exception {

}

{code}
This expects you to already have the API spec json 

 

The annotations are:

 
{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface EndPoint {
/**The supported http methods*/
  SolrRequest.METHOD[] method();

/**supported paths*/
  String[] path();

  PermissionNameProvider.Name permission();
}
{code}

{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Command {
   /**if this is not a json command, leave it empty.
   * Keep in mind that you cannot have duplicates.
   * Only one method per name
   *
   */
  String name() default "";

  String jsonSchema() default "";
}
{code}

  was:
An example v2 API may look as follows:
{code:java}
@EndPoint(
 path = "/api/cluster/package",
 method = POST,
 permission = PKG_EDIT
)
static class PkgEdit {
 @Command(name = "add", jsonSchema = "cluster.package.add.json")
 public void add(CallInfo callInfo) throws Exception {


 }

 @Command(name = "update", jsonSchema = "cluster.package.update.json")
 public void update(CallInfo callInfo) throws Exception {
}

 @Command(name = "delete", jsonSchema = "cluster.package.delete.json")
 boolean deletePackage(CallInfo params) throws Exception {

}

{code}
This expects you to already have the API spec json 

 

The annotations are:

 
{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface EndPoint {
/**The supported http methods*/
  SolrRequest.METHOD[] method();

/**supported paths*/
  String[] path();

  PermissionNameProvider.Name permission();
}
{code}

{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Command {
   /**if this is not a json command, leave it empty.
   * Keep in mind that you cannot have duplicates.
   * Only one method per name
   *
   */
  String name() default "";

  String commandSchemaFile() default "";
}
{code}


> An annotation based system to write v2 only APIs
> 
>
> Key: SOLR-13787
> URL: https://issues.apache.org/jira/browse/SOLR-13787
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> An example v2 API may look as follows:
> {code:java}
> @EndPoint(
>  path = "/api/cluster/package",
>  method = POST,
>  permission = PKG_EDIT
> )
> static class PkgEdit {
>  @Command(name = "add", jsonSchema = "cluster.package.add.json")
>  public void add(CallInfo callInfo) throws Exception {
>  }
>  @Command(name = "update", jsonSchema = "cluster.package.update.json")
>  public void update(CallInfo callInfo) throws Exception {
> }
>  @Command(name = "delete", jsonSchema = "cluster.package.delete.json")
>  boolean deletePackage(CallInfo params) throws Exception {
> }
> {code}
> This expects you to already have the API spec json 
>  
> The annotations are:
>  
> {code:java}
> @Retention(RetentionPolicy.RUNTIME)
> @Target({ElementType.TYPE})
> public @interface EndPoint {
> /**The supported http methods*/
>   SolrRequest.METHOD[] method();
> /**supported paths*/
>   String[] path();
>   PermissionNameProvider.Name permission();
> }
> {code}
> {code:java}
> @Retention(RetentionPolicy.RUNTIME)
> @Target(ElementType.METHOD)
> public @interface Command {
>/**if this is not a json command, leave it empty.
>* Keep in mind that you cannot have duplicates.
>* Only one method per name
>*
>*/
>   String name() default "";
>   String jsonSchema() default "";
> }
> {code}






[jira] [Updated] (SOLR-13787) An annotation based system to write v2 only APIs

2019-10-03 Thread Noble Paul (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-13787:
--
Description: 
example v2 API may look as follows
{code:java}
@EndPoint(
 path = "/api/cluster/package",
 method = POST,
 permission = PKG_EDIT
)
static class PkgEdit {
 @Command(name = "add", jsonSchema = "cluster.package.add.json")
 public void add(CallInfo callInfo) throws Exception {


 }

 @Command(name = "update", jsonSchema = "cluster.package.update.json")
 public void update(CallInfo callInfo) throws Exception {
}

 @Command(name = "delete", jsonSchema = "cluster.package.delete.json")
 boolean deletePackage(CallInfo params) throws Exception {

}

{code}
This expects you to already have the API spec json 

 

The annotations are:

 
{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface EndPoint {
/**The supported HTTP methods*/
  SolrRequest.METHOD[] method();

/**supported paths*/
  String[] path();

  PermissionNameProvider.Name permission();
}
{code}

{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Command {
   /**if this is not a JSON command, leave it empty.
   * Keep in mind that you cannot have duplicates.
   * Only one method per name
   *
   */
  String name() default "";

  String commandSchemaFile() default "";
}
{code}

  was:
example v2 API may look as follows
{code:java}
@EndPoint(
 spec = "cluster.package",
 method = POST,
 permission = PKG_EDIT
)
static class PkgEdit {
 @Command(name = "add")
 public void add(CallInfo callInfo) throws Exception {


 }

 @Command(name = "update")
 public void update(CallInfo callInfo) throws Exception {
}

 @Command(name = "delete")
 boolean deletePackage(CallInfo params) throws Exception {

}

{code}
This expects you to already have the API spec json 

 

The annotations are:

 
{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE})
public @interface EndPoint {
  /**name of the API spec file without the '.json' suffix
   */
  String spec();

  /**Http method
   */
  SolrRequest.METHOD method();

  /**The well-known permission name, if any
   */
  PermissionNameProvider.Name permission();
}
{code}
{code:java}
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Command {
  String name() default "";
}
{code}


> An annotation based system to write v2 only APIs
> 
>
> Key: SOLR-13787
> URL: https://issues.apache.org/jira/browse/SOLR-13787
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> example v2 API may look as follows
> {code:java}
> @EndPoint(
>  path = "/api/cluster/package",
>  method = POST,
>  permission = PKG_EDIT
> )
> static class PkgEdit {
>  @Command(name = "add", jsonSchema = "cluster.package.add.json")
>  public void add(CallInfo callInfo) throws Exception {
>  }
>  @Command(name = "update", jsonSchema = "cluster.package.update.json")
>  public void update(CallInfo callInfo) throws Exception {
> }
>  @Command(name = "delete", jsonSchema = "cluster.package.delete.json")
>  boolean deletePackage(CallInfo params) throws Exception {
> }
> {code}
> This expects you to already have the API spec json 
>  
> The annotations are:
>  
> {code:java}
> @Retention(RetentionPolicy.RUNTIME)
> @Target({ElementType.TYPE})
> public @interface EndPoint {
> /**The supported HTTP methods*/
>   SolrRequest.METHOD[] method();
> /**supported paths*/
>   String[] path();
>   PermissionNameProvider.Name permission();
> }
> {code}
> {code:java}
> @Retention(RetentionPolicy.RUNTIME)
> @Target(ElementType.METHOD)
> public @interface Command {
> >/**if this is not a JSON command, leave it empty.
>* Keep in mind that you cannot have duplicates.
>* Only one method per name
>*
>*/
>   String name() default "";
>   String commandSchemaFile() default "";
> }
> {code}
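The issue shows only the annotation definitions. A framework consuming them would typically discover the `@Command` methods by reflection and build a dispatch table. A minimal, self-contained sketch of that idea (class and method names here are hypothetical stand-ins, not Solr's actual implementation):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class CommandScanner {
  // Simplified stand-in for the @Command annotation described above
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface Command {
    String name() default "";
  }

  // Hypothetical endpoint class with two annotated command methods
  static class PkgEdit {
    @Command(name = "add")
    public void add() {}

    @Command(name = "delete")
    public void delete() {}
  }

  /** Build a command-name -> method dispatch table from an annotated class. */
  static Map<String, Method> scan(Class<?> clazz) {
    Map<String, Method> commands = new HashMap<>();
    for (Method m : clazz.getDeclaredMethods()) {
      Command c = m.getAnnotation(Command.class);
      if (c != null) {
        // "you cannot have duplicates" -- last one wins in this sketch
        commands.put(c.name(), m);
      }
    }
    return commands;
  }

  public static void main(String[] args) {
    Map<String, Method> commands = scan(PkgEdit.class);
    System.out.println(commands.size()); // both annotated methods are found
  }
}
```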



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] TheSench opened a new pull request #921: SOLR-13816: Move eDisMax params from private interface to public constants

2019-10-03 Thread GitBox
TheSench opened a new pull request #921: SOLR-13816: Move eDisMax params from 
private interface to public constants
URL: https://github.com/apache/lucene-solr/pull/921
 
 
   # Description
   
   `DisMaxParams` contains many eDisMax query string parameters and makes them 
publicly available so that consuming code does not need to rely on "magic 
strings".  Several of these parameters are missing (but are currently being 
supplied to the `ExtendedDismaxQParser` by a private interface).
   
   # Solution
   
   Move the missing parameters from the private static interface `DMP` into the 
`DisMaxParams` class so that they are publicly accessible to consuming code.
   
   # Tests
   
   None.  This is simply an implementation detail - it moves the definition 
of four static strings up one layer in the class hierarchy.
   
   # Checklist
   
   Please review the following and check all that apply:
   
   - [x] I have reviewed the guidelines for [How to 
Contribute](https://wiki.apache.org/solr/HowToContribute) and my code conforms 
to the standards described there to the best of my ability.
   - [x] I have created a Jira issue and added the issue ID to my pull request 
title.
   - [x] I am authorized to contribute this code to the ASF and have removed 
any code I do not have a license to distribute.
   - [x] I have given Solr maintainers 
[access](https://help.github.com/en/articles/allowing-changes-to-a-pull-request-branch-created-from-a-fork)
 to contribute to my PR branch. (optional but recommended)
   - [x] I have developed this patch against the `master` branch.
   - [ ] I have run `ant precommit` and the appropriate test suite.
   - [ ] I have added tests for my changes.
   - [ ] I have added documentation for the [Ref 
Guide](https://github.com/apache/lucene-solr/tree/master/solr/solr-ref-guide) 
(for Solr changes only).
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13816) eDismax: Missing Param Constants

2019-10-03 Thread Jonathan J Senchyna (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944176#comment-16944176
 ] 

Jonathan J Senchyna commented on SOLR-13816:


{{sow}} is technically defined in {{QueryParsing.java}}.  The other three of 
these are defined in a private static interface {{DMP}} within 
{{ExtendedDismaxQParser.java}}.

> eDismax: Missing Param Constants
> 
>
> Key: SOLR-13816
> URL: https://issues.apache.org/jira/browse/SOLR-13816
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Reporter: Jonathan J Senchyna
>Priority: Minor
>
> Currently, DisMaxParams contains a mix of constants for both the DisMax and 
> eDisMax query parsers; several of the eDisMax parameters are missing though.  
> This is needed to properly add support in Spring Data Solr.  Specifically, 
> constants for the following are not defined:
>  * sow
>  * lowercaseOperators
>  * stopwords
>  * uf






[jira] [Created] (SOLR-13816) eDismax: Missing Param Constants

2019-10-03 Thread Jonathan J Senchyna (Jira)
Jonathan J Senchyna created SOLR-13816:
--

 Summary: eDismax: Missing Param Constants
 Key: SOLR-13816
 URL: https://issues.apache.org/jira/browse/SOLR-13816
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: query parsers
Reporter: Jonathan J Senchyna


Currently, DisMaxParams contains a mix of constants for both the DisMax and 
eDisMax query parsers; several of the eDisMax parameters are missing though.  
This is needed to properly add support in Spring Data Solr.  Specifically, 
constants for the following are not defined:
 * sow
 * lowercaseOperators
 * stopwords
 * uf
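The proposed change would expose these four names as public constants so consumers stop relying on "magic strings". A hedged sketch of what that might look like (the class name here is a hypothetical stand-in; the actual patch adds them to `DisMaxParams`):

```java
/** Hypothetical holder for the four missing eDisMax parameter names. */
public final class EDisMaxParamsSketch {
  /** Split-on-whitespace flag. */
  public static final String SOW = "sow";
  /** Whether lowercase "and"/"or" are treated as operators. */
  public static final String LOWERCASE_OPS = "lowercaseOperators";
  /** Stopword filtering toggle. */
  public static final String STOPWORDS = "stopwords";
  /** User fields. */
  public static final String UF = "uf";

  private EDisMaxParamsSketch() {} // constants holder, not instantiable

  public static void main(String[] args) {
    // Consuming code references the constant instead of a literal string
    System.out.println(SOW); // prints "sow"
  }
}
```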






[jira] [Commented] (SOLR-13796) Fix Solr Test Performance

2019-10-03 Thread Mark Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13796?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944165#comment-16944165
 ] 

Mark Miller commented on SOLR-13796:


FYI, my method for speeding tests up is simply to run a test with YourKit, find 
silly slow spots, address them, and repeat.

Will try to share soon, amid a few months of travel and time away :(

> Fix Solr Test Performance
> -
>
> Key: SOLR-13796
> URL: https://issues.apache.org/jira/browse/SOLR-13796
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> I had kind of forgotten, but while working on Starburst I realized that 
> almost all of our tests are capable of being very fast and logging 10x less 
> as a result. When they get this fast, a lot of infrequent random failures 
> become frequent and things become much easier to debug. I had fixed a lot of 
> issues to make tests pretty damn fast in the starburst branch, but tons of 
> tests were still ignored due to the scope of changes going on.
> A variety of things have converged that have allowed me to absorb most of 
> that work and build on it while also nearly finishing it.
> This will be another huge PR aimed at addressing issues that make our tests 
> often take dozens of seconds to minutes when they should take mere seconds.
> As part of this issue, I would like to move the focus of non-nightly tests 
> towards being more minimal, consistent and fast.
> In exchange, we must put more effort and care into nightly tests. Not 
> something that happens now, but if we have solid, fast, consistent 
> non-nightly tests, that should open up some room for nightly tests to get a 
> status boost.






[jira] [Updated] (LUCENE-8999) expectThrows doesn't play nicely with "assume" failures

2019-10-03 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter updated LUCENE-8999:
---
Attachment: LUCENE-8999.patch
Status: Open  (was: Open)

Attaching a patch where I started pursuing this idea – not just for 
AssumptionViolatedException but also for AssertionError, so that if someone has 
an {{assertEquals(...)}} that fails somewhere down the stack inside their 
{{expectThrows(Foo.class, ...)}} call, the test will fail with _that_ message, 
not one that says {{"Unexpected exception type, expected Foo but got 
AssertionError"}}

The patch also handles the possibility that the caller may explicitly pass 
{{AssumptionViolatedException.class}} or {{AssertionError.class}} to 
{{expectThrows(...)}} (something our tests do a surprisingly non-zero number of 
times) and it still does what the caller would expect: returning the caught 
AssumptionViolatedException/AssertionError instead of re-throwing it

I haven't run the full test suite yet, but so far it seems to work well ... 
what do folks think?

/cc [~dawid.weiss], [~rjernst], [~munendrasn], [~gerlowskija]
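The behaviour described in the patch could be sketched roughly as follows (simplified stand-ins, assuming a toy exception hierarchy; this is not the actual LuceneTestCase code):

```java
public class ExpectThrowsSketch {
  /** Simplified stand-in for JUnit's AssumptionViolatedException. */
  static class AssumptionViolated extends RuntimeException {
    AssumptionViolated(String msg) { super(msg); }
  }

  /**
   * Sketch of an expectThrows that re-throws assumption failures (so the
   * runner records a SKIP) instead of reporting "unexpected exception type",
   * unless the caller explicitly expected that type.
   */
  static <T extends Throwable> T expectThrows(Class<T> expected, Runnable r) {
    try {
      r.run();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t); // caller explicitly expected this type
      }
      if (t instanceof AssumptionViolated) {
        throw (AssumptionViolated) t; // bubble up to the test runner as SKIP
      }
      throw new AssertionError("Unexpected exception type", t);
    }
    throw new AssertionError("Expected exception was not thrown");
  }

  public static void main(String[] args) {
    // When the assumption failure is the *expected* type, it is returned
    AssumptionViolated av = expectThrows(AssumptionViolated.class,
        () -> { throw new AssumptionViolated("no SecurityManager"); });
    System.out.println(av.getMessage());
  }
}
```

The same idea would apply to `AssertionError`, so that a failing nested `assertEquals(...)` surfaces its own message.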

> expectThrows doesn't play nicely with "assume" failures
> ---
>
> Key: LUCENE-8999
> URL: https://issues.apache.org/jira/browse/LUCENE-8999
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: LUCENE-8999.patch
>
>
> Once upon a time, {{TestRunWithRestrictedPermissions}} used to have test 
> methods that looked like this...
> {code:java}
> try {
>   runWithRestrictedPermissions(this::doSomeForbiddenStuff);
>   fail("this should not pass!");
> } catch (SecurityException se) {
>   // pass
> }
> {code}
> LUCENE-8938 changed this code to look like this...
> {code:java}
> expectThrows(SecurityException.class, () -> 
> runWithRestrictedPermissions(this::doSomeForbiddenStuff));
> {code}
> But a nuance of the existing code that isn't captured in the new code is that 
> {{runWithRestrictedPermissions(...)}} explicitly uses {{assumeTrue(..., 
> System.getSecurityManager() != null)}} to ensure that if a security manager 
> is not in use, the test should be {{SKIPed}} and not considered a pass or a 
> fail.
> The key issue being that {{assumeTrue(...)}} (and other 'assume' related 
> methods like it) throws an {{AssumptionViolatedException}} when the condition 
> isn't met, expecting this to propagate up to the Test Runner.
> With the _old_ code this worked as expected - the 
> {{AssumptionViolatedException}} would abort execution before the 
> {{fail(...)}} but not be caught by the {{catch}} and bubble up all the way to 
> the test runner so the test would be recorded as a SKIP.
> With the new code, {{expectThrows()}} is catching the 
> {{AssumptionViolatedException}} and since it doesn't match the expected 
> {{SecurityException.class}} is generating a test failure instead...
> {noformat}
>[junit4] Suite: org.apache.lucene.util.TestRunWithRestrictedPermissions
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestRunWithRestrictedPermissions 
> -Dtests.method=testCompletelyForbidden2 -Dtests.seed=4181E5FE9E84DBC4 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=luy -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J7  | 
> TestRunWithRestrictedPermissions.testCompletelyForbidden2 <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> Unexpected exception type, expected SecurityException but got 
> org.junit.AssumptionViolatedException: runWithRestrictedPermissions requires 
> a SecurityManager enabled
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([4181E5FE9E84DBC4:16509163A0E04B41]:0)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2729)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2718)
>[junit4]>at 
> org.apache.lucene.util.TestRunWithRestrictedPermissions.testCompletelyForbidden2(TestRunWithRestrictedPermissions.java:39)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>at 
> java.base/java.lang.reflect.Method.in

[jira] [Updated] (LUCENE-8999) expectThrows doesn't play nicely with "assume" failures

2019-10-03 Thread Chris M. Hostetter (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris M. Hostetter updated LUCENE-8999:
---
Status: Patch Available  (was: Open)

> expectThrows doesn't play nicely with "assume" failures
> ---
>
> Key: LUCENE-8999
> URL: https://issues.apache.org/jira/browse/LUCENE-8999
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Priority: Major
> Attachments: LUCENE-8999.patch
>
>
> Once upon a time, {{TestRunWithRestrictedPermissions}} used to have test 
> methods that looked like this...
> {code:java}
> try {
>   runWithRestrictedPermissions(this::doSomeForbiddenStuff);
>   fail("this should not pass!");
> } catch (SecurityException se) {
>   // pass
> }
> {code}
> LUCENE-8938 changed this code to look like this...
> {code:java}
> expectThrows(SecurityException.class, () -> 
> runWithRestrictedPermissions(this::doSomeForbiddenStuff));
> {code}
> But a nuance of the existing code that isn't captured in the new code is that 
> {{runWithRestrictedPermissions(...)}} explicitly uses {{assumeTrue(..., 
> System.getSecurityManager() != null)}} to ensure that if a security manager 
> is not in use, the test should be {{SKIPed}} and not considered a pass or a 
> fail.
> The key issue being that {{assumeTrue(...)}} (and other 'assume' related 
> methods like it) throws an {{AssumptionViolatedException}} when the condition 
> isn't met, expecting this to propagate up to the Test Runner.
> With the _old_ code this worked as expected - the 
> {{AssumptionViolatedException}} would abort execution before the 
> {{fail(...)}} but not be caught by the {{catch}} and bubble up all the way to 
> the test runner so the test would be recorded as a SKIP.
> With the new code, {{expectThrows()}} is catching the 
> {{AssumptionViolatedException}} and since it doesn't match the expected 
> {{SecurityException.class}} is generating a test failure instead...
> {noformat}
>[junit4] Suite: org.apache.lucene.util.TestRunWithRestrictedPermissions
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestRunWithRestrictedPermissions 
> -Dtests.method=testCompletelyForbidden2 -Dtests.seed=4181E5FE9E84DBC4 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=luy -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J7  | 
> TestRunWithRestrictedPermissions.testCompletelyForbidden2 <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> Unexpected exception type, expected SecurityException but got 
> org.junit.AssumptionViolatedException: runWithRestrictedPermissions requires 
> a SecurityManager enabled
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([4181E5FE9E84DBC4:16509163A0E04B41]:0)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2729)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2718)
>[junit4]>at 
> org.apache.lucene.util.TestRunWithRestrictedPermissions.testCompletelyForbidden2(TestRunWithRestrictedPermissions.java:39)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
>[junit4]>at java.base/java.lang.Thread.run(Thread.java:834)
>[junit4]> Caused by: org.junit.AssumptionViolatedException: 
> runWithRestrictedPermissions requires a SecurityManager enabled
>[junit4]>at 
> com.carrotsearch.randomizedtesting.RandomizedTest.assumeTrue(RandomizedTest.java:725)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.assumeTrue(LuceneTestCase.java:873)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.runWithRestrictedPermissions(LuceneTestCase.java:2917)
>[junit4]>at 
> org.apache.lucene.util.TestRunWithRestrictedPermissions.lambda$testCompletelyForbidden2$2(TestRunWithRestrictedPermissions.java:40)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2724)
>[junit4]>... 37 more
> {noformat}
> 
> While there might be easy fixes that cou

[jira] [Commented] (SOLR-13812) SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test coverage

2019-10-03 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16944075#comment-16944075
 ] 

Lucene/Solr QA commented on SOLR-13812:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 86m  
8s{color} | {color:green} core in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} test-framework in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13812 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12982107/SOLR-13812.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / a0396da |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/565/testReport/ |
| modules | C: solr/core solr/test-framework U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/565/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test 
> coverage
> 
>
> Key: SOLR-13812
> URL: https://issues.apache.org/jira/browse/SOLR-13812
> Project: Solr
>  Issue Type: Test
>Reporter: Diego Ceccarelli
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13812.patch, SOLR-13812.patch
>
>
> In 
> https://github.com/apache/lucene-solr/commit/4fedd7bd77219223cb09a660a3e2ce0e89c26eea#diff-21d4224105244d0fb50fe7e586a8495d
>  on https://github.com/apache/lucene-solr/pull/300 for SOLR-11831 
> [~diegoceccarelli] proposes to add javadocs and uneven length parameter 
> rejection for the {{SolrTestCaseJ4.params(String...)}} method.
> This ticket proposes to do that plus to also add basic test coverage for the 
> method, separately from the unrelated SOLR-11831 changes.






[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943830#comment-16943830
 ] 

ASF subversion and git services commented on SOLR-8241:
---

Commit a0396da64b5874886a801f22b7cb81e11ed9642a in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=a0396da ]

SOLR-8241: Fix an NPE.


> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Ben Manes
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: EvictionBenchmark.png, GetPutBenchmark.png, 
> SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch, SOLR-8241.patch, caffeine-benchmark.txt, proposal.patch, 
> solr_caffeine.patch.gz, solr_jmh_results.json
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
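To illustrate the "frequency sketch" idea mentioned above, here is a toy counter-based popularity estimator. It is illustrative only: real TinyLFU implementations (e.g. in Caffeine) use a count-min sketch with 4-bit counters and periodic aging, and all names below are hypothetical.

```java
/** Toy frequency sketch, loosely in the spirit of TinyLFU. */
public class FreqSketch {
  private final int[] counters = new int[1024]; // power-of-two table
  private static final int MAX = 15;            // saturate like a 4-bit counter
  private static final int HASHES = 4;          // slots consulted per key

  private int index(Object key, int seed) {
    // Cheap per-seed hash mixing; table size must stay a power of two
    int h = key.hashCode() * (31 + 2 * seed) + seed;
    return (h ^ (h >>> 16)) & (counters.length - 1);
  }

  /** Record one access to key across several hashed counter slots. */
  public void increment(Object key) {
    for (int seed = 0; seed < HASHES; seed++) {
      int i = index(key, seed);
      if (counters[i] < MAX) counters[i]++;
    }
  }

  /** Estimated popularity: minimum over the key's counter slots. */
  public int frequency(Object key) {
    int min = MAX;
    for (int seed = 0; seed < HASHES; seed++) {
      min = Math.min(min, counters[index(key, seed)]);
    }
    return min;
  }

  public static void main(String[] args) {
    FreqSketch s = new FreqSketch();
    for (int i = 0; i < 10; i++) s.increment("hot");
    s.increment("cold");
    System.out.println(s.frequency("hot") >= 10); // each slot saw all 10 hits
  }
}
```

An admission policy can then compare the frequency of a candidate entry against the entry it would evict.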






[GitHub] [lucene-solr] jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous Caching in LRUQueryCache

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous 
Caching in LRUQueryCache
URL: https://github.com/apache/lucene-solr/pull/916#discussion_r331190828
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/LRUQueryCache.java
 ##
 @@ -88,13 +92,47 @@
  * @lucene.experimental
  */
 public class LRUQueryCache implements QueryCache, Accountable {
+  /** Act as key for the inflight queries map */
+  private static class MapKey {
+private final Query query;
+private final IndexReader.CacheKey cacheKey;
+
+public MapKey(Query query, IndexReader.CacheKey cacheKey) {
+  this.query = query;
+  this.cacheKey = cacheKey;
+}
+
+public Query getQuery() {
+  return query;
+}
+
+public IndexReader.CacheKey getCacheKey() {
+  return cacheKey;
+}
+
+@Override
+public int hashCode() { return query.hashCode() ^ cacheKey.hashCode(); }
 
 Review comment:
   Can you use the usual polynomial formula `h0 + 31 * (h1 + 31 * ( ... ))`? 
The use of `^` is fine here, but it is a source of performance bugs in some 
scenarios. For instance Java's AbstractMap.Entry does this, which means that 
whenever you map a key to itself, the hashcode is 0 (because both hashcodes are 
equal). This is a lot of collisions if you have lots of entries that have the 
same key and value. So I prefer avoiding this pattern.
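The difference between the XOR and polynomial combinations is easy to see in a small sketch (this `MapKeySketch` is a hypothetical stand-in, not the PR's actual `MapKey`):

```java
public class MapKeySketch {
  private final Object query;
  private final Object cacheKey;

  public MapKeySketch(Object query, Object cacheKey) {
    this.query = query;
    this.cacheKey = cacheKey;
  }

  /**
   * Polynomial combination: order-sensitive, and a key paired with itself
   * no longer hashes to 0 as it does with XOR.
   */
  @Override
  public int hashCode() {
    return query.hashCode() + 31 * cacheKey.hashCode();
  }

  public static void main(String[] args) {
    Object k = "same";
    int xorStyle = k.hashCode() ^ k.hashCode();        // always 0: collision-prone
    int polyStyle = new MapKeySketch(k, k).hashCode(); // h + 31*h == 32*h
    System.out.println(xorStyle == 0);                 // true
    System.out.println(polyStyle == 32 * k.hashCode()); // true
  }
}
```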





[GitHub] [lucene-solr] jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous Caching in LRUQueryCache

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous 
Caching in LRUQueryCache
URL: https://github.com/apache/lucene-solr/pull/916#discussion_r331191644
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/LRUQueryCache.java
 ##
 @@ -732,8 +821,24 @@ public ScorerSupplier scorerSupplier(LeafReaderContext 
context) throws IOExcepti
 
   if (docIdSet == null) {
 if (policy.shouldCache(in.getQuery())) {
-  docIdSet = cache(context);
-  putIfAbsent(in.getQuery(), docIdSet, cacheHelper);
+  boolean cacheSynchronously = executor == null;
+
+  // If asynchronous caching is requested, perform the same and return
+  // the uncached iterator
+  if (cacheSynchronously == false) {
+cacheSynchronously = cacheAsynchronously(context, cacheHelper);
+
+// If async caching failed, synchronous caching will
+// be performed, hence do not return the uncached value
+if (cacheSynchronously == false) {
+  return in.scorerSupplier(context);
+}
+  }
+
+  if (cacheSynchronously) {
 
 Review comment:
   Thanks I had missed it.





[GitHub] [lucene-solr] jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous Caching in LRUQueryCache

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous 
Caching in LRUQueryCache
URL: https://github.com/apache/lucene-solr/pull/916#discussion_r331191303
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/LRUQueryCache.java
 ##
 @@ -656,10 +713,21 @@ public long ramBytesUsed() {
 // threads when IndexSearcher is created with threads
 private final AtomicBoolean used;
 
+private final Executor executor;
+
 CachingWrapperWeight(Weight in, QueryCachingPolicy policy) {
   super(in.getQuery(), 1f);
   this.in = in;
   this.policy = policy;
+  this.executor = null;
+  used = new AtomicBoolean(false);
+}
+
+CachingWrapperWeight(Weight in, QueryCachingPolicy policy, Executor 
executor) {
+  super(in.getQuery(), 1f);
+  this.in = in;
+  this.policy = policy;
+  this.executor = executor;
   used = new AtomicBoolean(false);
 }
 
 Review comment:
   nit: can we have a single constructor? The first one looks unused?





[GitHub] [lucene-solr] jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous Caching in LRUQueryCache

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #916: LUCENE-8213: Asynchronous 
Caching in LRUQueryCache
URL: https://github.com/apache/lucene-solr/pull/916#discussion_r331192596
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/search/LRUQueryCache.java
 ##
 @@ -832,5 +931,43 @@ public BulkScorer bulkScorer(LeafReaderContext context) 
throws IOException {
   return new DefaultBulkScorer(new ConstantScoreScorer(this, 0f, 
ScoreMode.COMPLETE_NO_SCORES, disi));
 }
 
+// Perform a cache load asynchronously
+// @return true if synchronous caching is needed, false otherwise
+private boolean cacheAsynchronously(LeafReaderContext context, 
IndexReader.CacheHelper cacheHelper) {
+  /*
+   * If the current query is already being asynchronously cached,
+   * do not trigger another cache operation
+   */
+  if (inFlightAsyncLoadQueries.add(new MapKey(in.getQuery(),
+  cacheHelper.getKey())) == false) {
+return false;
+  }
+
+  FutureTask task = new FutureTask<>(() -> {
+try {
+  DocIdSet localDocIdSet = cache(context);
+  putIfAbsent(in.getQuery(), localDocIdSet, cacheHelper);
+} finally {
+  // Remove the key from inflight
+  Object retValue = inFlightAsyncLoadQueries.remove(new 
MapKey(in.getQuery(), cacheHelper.getKey()));
 
 Review comment:
   this call is protected by no lock while `inFlightAsyncLoadQueries` is not 
concurrent?
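One way to make the add/remove pair safe without an external lock, sketched below under the assumption that the in-flight set can be swapped for a concurrent one (`MapKey` is a simplified stand-in for the PR's composite (query, cache key) class):

```java
import java.util.Objects;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class InFlightRegistry {
  // Simplified stand-in for the PR's composite key class.
  static class MapKey {
    final Object query;
    final Object cacheKey;

    MapKey(Object query, Object cacheKey) {
      this.query = query;
      this.cacheKey = cacheKey;
    }

    @Override public boolean equals(Object o) {
      if (!(o instanceof MapKey)) return false;
      MapKey other = (MapKey) o;
      return query.equals(other.query) && cacheKey.equals(other.cacheKey);
    }

    @Override public int hashCode() {
      return Objects.hash(query, cacheKey);
    }
  }

  // Backed by ConcurrentHashMap, so add/remove need no external locking.
  private final Set<MapKey> inFlight = ConcurrentHashMap.newKeySet();

  // Returns true if this caller won the race and should start the load.
  boolean tryStart(MapKey key) {
    return inFlight.add(key);
  }

  // Runs in the task's finally block, whether the load succeeded or not.
  void finish(MapKey key) {
    inFlight.remove(key);
  }

  public static void main(String[] args) {
    InFlightRegistry r = new InFlightRegistry();
    if (!r.tryStart(new MapKey("q", "leaf1"))) throw new AssertionError("first add should win");
    if (r.tryStart(new MapKey("q", "leaf1"))) throw new AssertionError("duplicate must be rejected");
    r.finish(new MapKey("q", "leaf1"));
    if (!r.tryStart(new MapKey("q", "leaf1"))) throw new AssertionError("re-add after finish should win");
  }
}
```

`Set.add` on a `ConcurrentHashMap`-backed set is atomic, so the "did I win the race" check and the insertion happen in one step.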





[GitHub] [lucene-solr] jpountz commented on a change in pull request #905: LUCENE-8990: Add estimateDocCount(visitor) method to PointValues

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #905: LUCENE-8990: Add 
estimateDocCount(visitor) method to PointValues
URL: https://github.com/apache/lucene-solr/pull/905#discussion_r331178088
 
 

 ##
 File path: 
lucene/core/src/test/org/apache/lucene/codecs/lucene60/TestLucene60PointsFormat.java
 ##
 @@ -385,12 +298,102 @@ public Relation compare(byte[] minPackedValue, byte[] 
maxPackedValue) {
 }
 return Relation.CELL_CROSSES_QUERY;
   }
-});
+};
+// If only one point matches, then the point count is 
(actualMaxPointsInLeafNode + 1) / 2
+// in general, or maybe 2x that if the point is a split value
+final long pointCount = points.estimatePointCount(onePointMatchVisitor);
+assertTrue(""+pointCount,
+pointCount == (actualMaxPointsInLeafNode + 1) / 2 || // common case
+pointCount == 2*((actualMaxPointsInLeafNode + 1) / 2)); // if the 
point is a split value
 
-assertTrue(""+docCount,
-docCount == Math.ceil(((actualMaxPointsInLeafNode + 1) / 2) / 
pointsPerDocument) || // common case
-docCount == Math.ceil((2*((actualMaxPointsInLeafNode + 1) / 2)) / 
pointsPerDocument)); // if the point is a split value
+final long docCount = points.estimateDocCount(onePointMatchVisitor);
+if (multiValues) {
+  assertEquals(docCount, (long) (docCount * (1d - Math.pow((numDocs - pointCount) / points.size(), points.size() / docCount))));
+} else {
+  assertEquals(pointCount, docCount);
+}
 r.close();
 dir.close();
   }
+
+  public void testDocCountEdgeCases() {
+PointValues values = getPointValues(Long.MAX_VALUE, 1, Long.MAX_VALUE);
+long docs = values.estimateDocCount(null);
+assertEquals(1, docs);
+values = getPointValues(Long.MAX_VALUE, 1, 1);
+docs = values.estimateDocCount(null);
+assertEquals(1, docs);
+values = getPointValues(Long.MAX_VALUE, Integer.MAX_VALUE, Long.MAX_VALUE);
+docs = values.estimateDocCount(null);
+assertEquals(Integer.MAX_VALUE, docs);
+values = getPointValues(Long.MAX_VALUE, Integer.MAX_VALUE, Long.MAX_VALUE 
/ 2);
+docs = values.estimateDocCount(null);
+assertEquals(Integer.MAX_VALUE, docs);
+values = getPointValues(Long.MAX_VALUE, Integer.MAX_VALUE, 1);
+docs = values.estimateDocCount(null);
+assertEquals(1, docs);
+  }
+
+  public void testRandomDocCount() {
+for (int i = 0; i < 100; i++) {
+  long size = TestUtil.nextLong(random(), 1, Long.MAX_VALUE);
+  int maxDoc = (size > Integer.MAX_VALUE) ? Integer.MAX_VALUE : 
Math.toIntExact(size);
+  int docCount = TestUtil.nextInt(random(), 1, maxDoc);
+  long estimatePointCount = TestUtil.nextLong(random(), 0, size);
+  PointValues values = getPointValues(size, docCount, estimatePointCount);
+  long docs = values.estimateDocCount(null);
+  assertTrue(docs <= estimatePointCount);
+  assertTrue(docs <= maxDoc);
 
 Review comment:
   maybe also assert that `docs >= estimatedPointCount / (size/docCount)`?





[GitHub] [lucene-solr] jpountz commented on a change in pull request #905: LUCENE-8990: Add estimateDocCount(visitor) method to PointValues

2019-10-03 Thread GitBox
jpountz commented on a change in pull request #905: LUCENE-8990: Add 
estimateDocCount(visitor) method to PointValues
URL: https://github.com/apache/lucene-solr/pull/905#discussion_r33490
 
 

 ##
 File path: lucene/core/src/java/org/apache/lucene/index/PointValues.java
 ##
 @@ -241,9 +241,28 @@ default void visit(DocIdSetIterator iterator, byte[] 
packedValue) throws IOExcep
* than {@link #intersect(IntersectVisitor)}.
* @see DocIdSetIterator#cost */
   public long estimateDocCount(IntersectVisitor visitor) {
-return (long) Math.ceil(estimatePointCount(visitor) / ((double) size() / 
getDocCount()));
+long estimatedPointCount = estimatePointCount(visitor);
+int docCount = getDocCount();
+double size = size();
+if (estimatedPointCount >= size) {
+  // match all docs
+  return docCount;
+} else if (size == docCount || estimatedPointCount == 0L ) {
+  // if the point count estimate is 0 or we have only single values
+  // return this estimate
+  return  estimatedPointCount;
+} else {
+  // in case of multi values estimate the number of docs using the 
solution provided in
+  // 
https://math.stackexchange.com/questions/1175295/urn-problem-probability-of-drawing-balls-of-k-unique-colors
+  // then approximate the solution for points per doc << size() which 
results in the expression
+  // D * (1 - ((N - n) / N)^(N/D))
 
 Review comment:
   maybe clarify what are D, N and n?
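From the surrounding diff the symbols appear to be D = `getDocCount()`, N = `size()` and n = `estimatePointCount(visitor)` — an inference from the code, not something the PR states. A standalone sketch of the estimate with the symbols spelled out:

```java
// Standalone sketch of the doc-count estimate being reviewed:
// D = number of docs that have points, N = total number of points,
// n = estimated number of matching points. The multi-valued branch
// computes D * (1 - ((N - n) / N)^(N / D)).
class DocCountEstimate {
  static long estimateDocCount(long n, long N, int D) {
    if (n >= N) {
      return D; // every point matches, so every doc matches
    } else if (N == D || n == 0) {
      return n; // single-valued points, or nothing matches
    } else {
      return (long) Math.ceil(D * (1d - Math.pow((N - n) / (double) N, N / (double) D)));
    }
  }

  public static void main(String[] args) {
    if (estimateDocCount(100, 100, 10) != 10) throw new AssertionError("all points match");
    if (estimateDocCount(0, 100, 10) != 0) throw new AssertionError("nothing matches");
    if (estimateDocCount(5, 100, 100) != 5) throw new AssertionError("single-valued case");
    // multi-valued: 10 * (1 - 0.5^10) = 9.99..., rounded up to 10
    if (estimateDocCount(50, 100, 10) != 10) throw new AssertionError("multi-valued case");
  }
}
```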





[jira] [Commented] (LUCENE-8999) expectThrows doesn't play nicely with "assume" failures

2019-10-03 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943798#comment-16943798
 ] 

Chris M. Hostetter commented on LUCENE-8999:


strawman: should {{expectThrows(...)}} explicitly {{catch 
(AssumptionViolatedException ae) ...}} and immediately re-throw?
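The strawman above can be sketched as follows; `AssumptionViolatedException` is stubbed locally so the example carries no JUnit dependency, and this is only an illustration of the proposed behavior, not LuceneTestCase's actual implementation:

```java
class Expect {
  // Stubbed stand-in for org.junit.AssumptionViolatedException, kept local
  // so the sketch has no JUnit dependency.
  static class AssumptionViolatedException extends RuntimeException {
    AssumptionViolatedException(String msg) { super(msg); }
  }

  interface ThrowingRunnable { void run() throws Throwable; }

  // Strawman expectThrows: re-throw assumption failures immediately so the
  // runner records a SKIP instead of converting them into test failures.
  static <T extends Throwable> T expectThrows(Class<T> expected, ThrowingRunnable r) {
    try {
      r.run();
    } catch (AssumptionViolatedException ave) {
      throw ave; // propagate to the test runner
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t);
      }
      throw new AssertionError("Unexpected exception type, expected "
          + expected.getSimpleName() + " but got " + t, t);
    }
    throw new AssertionError("Expected " + expected.getSimpleName() + " was not thrown");
  }

  public static void main(String[] args) {
    SecurityException se = expectThrows(SecurityException.class,
        () -> { throw new SecurityException("forbidden"); });
    if (!"forbidden".equals(se.getMessage())) throw new AssertionError();
    try {
      expectThrows(SecurityException.class,
          () -> { throw new AssumptionViolatedException("requires a SecurityManager"); });
      throw new AssertionError("assumption failure should have propagated");
    } catch (AssumptionViolatedException skip) {
      // the runner would record this as a SKIP
    }
  }
}
```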

> expectThrows doesn't play nicely with "assume" failures
> ---
>
> Key: LUCENE-8999
> URL: https://issues.apache.org/jira/browse/LUCENE-8999
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Chris M. Hostetter
>Priority: Major
>
> Once upon a time, {{TestRunWithRestrictedPermissions}} used to have test 
> methods that looked like this...
> {code:java}
> try {
>   runWithRestrictedPermissions(this::doSomeForbiddenStuff);
>   fail("this should not pass!");
> } catch (SecurityException se) {
>   // pass
> }
> {code}
> LUCENE-8938 changed this code to look like this...
> {code:java}
> expectThrows(SecurityException.class, () -> 
> runWithRestrictedPermissions(this::doSomeForbiddenStuff));
> {code}
> But a nuance of the existing code that isn't captured in the new code is that 
> {{runWithRestrictedPermissions(...)}} explicitly uses {{assumeTrue(..., 
> System.getSecurityManager() != null)}} to ensure that if a security manager 
> is not in use, the test should be {{SKIPed}} and not considered a pass or a 
> fail.
> The key issue being that {{assumeTrue(...)}} (and other 'assume' related 
> methods like it) throws an {{AssumptionViolatedException}} when the condition 
> isn't met, expecting this to propagate up to the Test Runner.
> With the _old_ code this worked as expected - the 
> {{AssumptionViolatedException}} would abort execution before the 
> {{fail(...)}} but not be caught by the {{catch}} and bubble up all the way to 
> the test runner so the test would be recorded as a SKIP.
> With the new code, {{expectThrows()}} is catching the 
> {{AssumptionViolatedException}} and since it doesn't match the expected 
> {{SecurityException.class}} is generating a test failure instead...
> {noformat}
>[junit4] Suite: org.apache.lucene.util.TestRunWithRestrictedPermissions
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestRunWithRestrictedPermissions 
> -Dtests.method=testCompletelyForbidden2 -Dtests.seed=4181E5FE9E84DBC4 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=luy -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 0.10s J7  | 
> TestRunWithRestrictedPermissions.testCompletelyForbidden2 <<<
>[junit4]> Throwable #1: junit.framework.AssertionFailedError: 
> Unexpected exception type, expected SecurityException but got 
> org.junit.AssumptionViolatedException: runWithRestrictedPermissions requires 
> a SecurityManager enabled
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([4181E5FE9E84DBC4:16509163A0E04B41]:0)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2729)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2718)
>[junit4]>at 
> org.apache.lucene.util.TestRunWithRestrictedPermissions.testCompletelyForbidden2(TestRunWithRestrictedPermissions.java:39)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>at 
> java.base/java.lang.reflect.Method.invoke(Method.java:566)
>[junit4]>at java.base/java.lang.Thread.run(Thread.java:834)
>[junit4]> Caused by: org.junit.AssumptionViolatedException: 
> runWithRestrictedPermissions requires a SecurityManager enabled
>[junit4]>at 
> com.carrotsearch.randomizedtesting.RandomizedTest.assumeTrue(RandomizedTest.java:725)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.assumeTrue(LuceneTestCase.java:873)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.runWithRestrictedPermissions(LuceneTestCase.java:2917)
>[junit4]>at 
> org.apache.lucene.util.TestRunWithRestrictedPermissions.lambda$testCompletelyForbidden2$2(TestRunWithRestrictedPermissions.java:40)
>[junit4]>at 
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2724)
> 

[jira] [Created] (LUCENE-8999) expectThrows doesn't play nicely with "assume" failures

2019-10-03 Thread Chris M. Hostetter (Jira)
Chris M. Hostetter created LUCENE-8999:
--

 Summary: expectThrows doesn't play nicely with "assume" failures
 Key: LUCENE-8999
 URL: https://issues.apache.org/jira/browse/LUCENE-8999
 Project: Lucene - Core
  Issue Type: Test
Reporter: Chris M. Hostetter


Once upon a time, {{TestRunWithRestrictedPermissions}} used to have test methods 
that looked like this...
{code:java}
try {
  runWithRestrictedPermissions(this::doSomeForbiddenStuff);
  fail("this should not pass!");
} catch (SecurityException se) {
  // pass
}
{code}
LUCENE-8938 changed this code to look like this...
{code:java}
expectThrows(SecurityException.class, () -> 
runWithRestrictedPermissions(this::doSomeForbiddenStuff));
{code}
But a nuance of the existing code that isn't captured in the new code is that 
{{runWithRestrictedPermissions(...)}} explicitly uses {{assumeTrue(..., 
System.getSecurityManager() != null)}} to ensure that if a security manager is 
not in use, the test should be {{SKIPed}} and not considered a pass or a fail.

The key issue being that {{assumeTrue(...)}} (and other 'assume' related 
methods like it) throws an {{AssumptionViolatedException}} when the condition 
isn't met, expecting this to propagate up to the Test Runner.

With the _old_ code this worked as expected - the 
{{AssumptionViolatedException}} would abort execution before the {{fail(...)}} 
but not be caught by the {{catch}} and bubble up all the way to the test runner 
so the test would be recorded as a SKIP.

With the new code, {{expectThrows()}} is catching the 
{{AssumptionViolatedException}} and since it doesn't match the expected 
{{SecurityException.class}} is generating a test failure instead...
{noformat}
   [junit4] Suite: org.apache.lucene.util.TestRunWithRestrictedPermissions
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestRunWithRestrictedPermissions 
-Dtests.method=testCompletelyForbidden2 -Dtests.seed=4181E5FE9E84DBC4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=luy -Dtests.timezone=Etc/GMT-7 -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.10s J7  | 
TestRunWithRestrictedPermissions.testCompletelyForbidden2 <<<
   [junit4]> Throwable #1: junit.framework.AssertionFailedError: Unexpected 
exception type, expected SecurityException but got 
org.junit.AssumptionViolatedException: runWithRestrictedPermissions requires a 
SecurityManager enabled
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([4181E5FE9E84DBC4:16509163A0E04B41]:0)
   [junit4]>at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2729)
   [junit4]>at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2718)
   [junit4]>at 
org.apache.lucene.util.TestRunWithRestrictedPermissions.testCompletelyForbidden2(TestRunWithRestrictedPermissions.java:39)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:834)
   [junit4]> Caused by: org.junit.AssumptionViolatedException: 
runWithRestrictedPermissions requires a SecurityManager enabled
   [junit4]>at 
com.carrotsearch.randomizedtesting.RandomizedTest.assumeTrue(RandomizedTest.java:725)
   [junit4]>at 
org.apache.lucene.util.LuceneTestCase.assumeTrue(LuceneTestCase.java:873)
   [junit4]>at 
org.apache.lucene.util.LuceneTestCase.runWithRestrictedPermissions(LuceneTestCase.java:2917)
   [junit4]>at 
org.apache.lucene.util.TestRunWithRestrictedPermissions.lambda$testCompletelyForbidden2$2(TestRunWithRestrictedPermissions.java:40)
   [junit4]>at 
org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2724)
   [junit4]>... 37 more
{noformat}

While there might be easy fixes that could be made explicitly to 
{{TestRunWithRestrictedPermissions}} to deal with this particular problem, it 
seems like perhaps we should consider changes to better deal with this _type_ 
of problem that might exist elsewhere or occur in the future?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2019-10-03 Thread Chris M. Hostetter (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943774#comment-16943774
 ] 

Chris M. Hostetter commented on SOLR-8241:
--

this seems to have broken {{SolrInfoBeanTest.testCallMBeanInfo}} regardless of 
seed (at least on linux)...

From jenkins: thetaphi_Lucene-Solr-master-Linux_24858.log.txt
{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=SolrInfoBeanTest 
-Dtests.method=testCallMBeanInfo -Dtests.seed=A6CF2477E5B0DBBA 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=kk-KZ 
-Dtests.timezone=Africa/Ndjamena -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 0.21s J0 | SolrInfoBeanTest.testCallMBeanInfo <<<
   [junit4]> Throwable #1: java.lang.AssertionError: 
org.apache.solr.search.CaffeineCache
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([A6CF2477E5B0DBBA:59A9A94B8EC8A6A4]:0)
   [junit4]>at 
org.apache.solr.SolrInfoBeanTest.testCallMBeanInfo(SolrInfoBeanTest.java:73)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:567)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:830)
{noformat}

...jenkins found that failure on java13, i can reproduce it (again, with any 
seed) on java11.


> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Ben Manes
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: EvictionBenchmark.png, GetPutBenchmark.png, 
> SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch, SOLR-8241.patch, caffeine-benchmark.txt, proposal.patch, 
> solr_caffeine.patch.gz, solr_jmh_results.json
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operate in O(1) time. When using available 
> academic traces the policy provides a near optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
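The frequency-sketch idea can be illustrated with a minimal count-min style estimator. This is only a sketch of the general technique — Caffeine's actual `FrequencySketch` is more sophisticated (it packs small counters and periodically halves them to age out stale history):

```java
class FrequencySketch {
  private static final int[] SEEDS = {0x9E3779B9, 0x85EBCA6B, 0xC2B2AE35, 0x27D4EB2F};
  private final int[][] rows;
  private final int mask;

  FrequencySketch(int width) {
    // Round the width up to a power of two so indexing can use a bit mask.
    int cap = Integer.highestOneBit(Math.max(1, width - 1)) << 1;
    rows = new int[SEEDS.length][cap];
    mask = cap - 1;
  }

  // One independent slot per row for a key, derived by mixing its hash.
  private int index(Object key, int row) {
    int h = key.hashCode() * SEEDS[row];
    h ^= h >>> 16;
    return h & mask;
  }

  void increment(Object key) {
    for (int r = 0; r < rows.length; r++) {
      rows[r][index(key, r)]++;
    }
  }

  // Estimated frequency: never under-counts, may over-count on collisions.
  int frequency(Object key) {
    int min = Integer.MAX_VALUE;
    for (int r = 0; r < rows.length; r++) {
      min = Math.min(min, rows[r][index(key, r)]);
    }
    return min;
  }

  public static void main(String[] args) {
    FrequencySketch sketch = new FrequencySketch(256);
    for (int i = 0; i < 5; i++) sketch.increment("a");
    sketch.increment("b");
    int fa = sketch.frequency("a");
    if (fa < 5) throw new AssertionError("count-min style sketches never under-count");
    sketch.increment("a");
    if (sketch.frequency("a") != fa + 1) throw new AssertionError("incrementing bumps every row's counter");
  }
}
```

The sketch needs only a few words of memory per cache entry, which is what makes frequency-based admission cheap enough to bolt onto an O(1) LRU.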






[jira] [Commented] (SOLR-12217) Add support for shards.preference in single shard cases

2019-10-03 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943767#comment-16943767
 ] 

Erick Erickson commented on SOLR-12217:
---

[~tflobbe] Do you have any wild guesses how much effort this would take?
Thanks,
Erick

> Add support for shards.preference in single shard cases
> ---
>
> Key: SOLR-12217
> URL: https://issues.apache.org/jira/browse/SOLR-12217
> Project: Solr
>  Issue Type: New Feature
>Reporter: Tomas Eduardo Fernandez Lobbe
>Priority: Major
>
> SOLR-11982 Added support for {{shards.preference}}, a way to define the 
> sorting of replicas within a shard by preference (replica types/location). 
> This only works on multi-shard cases. We should add support for the case of 
> single shards when using CloudSolrClient






[GitHub] [lucene-solr] jpountz commented on issue #899: LUCENE-8989: Allow IndexSearcher To Handle Rejected Execution

2019-10-03 Thread GitBox
jpountz commented on issue #899: LUCENE-8989: Allow IndexSearcher To Handle 
Rejected Execution
URL: https://github.com/apache/lucene-solr/pull/899#issuecomment-538019799
 
 
   Thanks.





[jira] [Updated] (SOLR-13812) SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test coverage

2019-10-03 Thread Christine Poerschke (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-13812:
---
Attachment: SOLR-13812.patch

> SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test 
> coverage
> 
>
> Key: SOLR-13812
> URL: https://issues.apache.org/jira/browse/SOLR-13812
> Project: Solr
>  Issue Type: Test
>Reporter: Diego Ceccarelli
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13812.patch, SOLR-13812.patch
>
>
> In 
> https://github.com/apache/lucene-solr/commit/4fedd7bd77219223cb09a660a3e2ce0e89c26eea#diff-21d4224105244d0fb50fe7e586a8495d
>  on https://github.com/apache/lucene-solr/pull/300 for SOLR-11831 
> [~diegoceccarelli] proposes to add javadocs and uneven length parameter 
> rejection for the {{SolrTestCaseJ4.params(String...)}} method.
> This ticket proposes to do that plus to also add basic test coverage for the 
> method, separately from the unrelated SOLR-11831 changes.






[jira] [Commented] (SOLR-13812) SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test coverage

2019-10-03 Thread Christine Poerschke (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943698#comment-16943698
 ] 

Christine Poerschke commented on SOLR-13812:


bq. ... use {{expectThrows()}}

Good idea, thanks [~munendrasn]! The try-catch does, as you say, allow for message 
validation, but the "Params length should be even" message isn't particularly 
interesting here. expectThrows is shorter, and with it being shorter, adding an 
additional test for "params length 1" becomes realistic too.
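A minimal sketch of the check under discussion, with a plain map standing in for Solr's `ModifiableSolrParams` (an assumption made for self-containment; the real `params(...)` returns SolrParams and supports repeated keys):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class Params {
  // params("k1","v1","k2","v2",...) pairs up its arguments; an odd count is
  // a caller bug and is rejected up front instead of failing later.
  static Map<String, String> params(String... kv) {
    if (kv.length % 2 != 0) {
      throw new IllegalArgumentException("Params length should be even");
    }
    Map<String, String> out = new LinkedHashMap<>();
    for (int i = 0; i < kv.length; i += 2) {
      out.put(kv[i], kv[i + 1]);
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, String> p = params("q", "*:*", "rows", "10");
    if (p.size() != 2 || !"*:*".equals(p.get("q"))) throw new AssertionError();
    try {
      params("q"); // length 1: uneven, rejected
      throw new AssertionError("uneven params length should be rejected");
    } catch (IllegalArgumentException expected) {
      if (!"Params length should be even".equals(expected.getMessage())) throw new AssertionError();
    }
  }
}
```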


> SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test 
> coverage
> 
>
> Key: SOLR-13812
> URL: https://issues.apache.org/jira/browse/SOLR-13812
> Project: Solr
>  Issue Type: Test
>Reporter: Diego Ceccarelli
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13812.patch
>
>
> In 
> https://github.com/apache/lucene-solr/commit/4fedd7bd77219223cb09a660a3e2ce0e89c26eea#diff-21d4224105244d0fb50fe7e586a8495d
>  on https://github.com/apache/lucene-solr/pull/300 for SOLR-11831 
> [~diegoceccarelli] proposes to add javadocs and uneven length parameter 
> rejection for the {{SolrTestCaseJ4.params(String...)}} method.
> This ticket proposes to do that plus to also add basic test coverage for the 
> method, separately from the unrelated SOLR-11831 changes.






[jira] [Commented] (SOLR-13815) Live split can lose data

2019-10-03 Thread Yonik Seeley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943695#comment-16943695
 ] 

Yonik Seeley commented on SOLR-13815:
-

OK, I've confirmed that this test sometimes fails on master (normally with a single 
doc missing).
The test itself is pretty simple, so this may be a real bug that is relatively 
easy to trigger. I didn't see any exceptions in the failing test run either 
(before the failed assertion).
Next step is to enhance the test to figure out which documents are missing, to 
aid in debugging.

> Live split can lose data
> 
>
> Key: SOLR-13815
> URL: https://issues.apache.org/jira/browse/SOLR-13815
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This issue is to investigate potential data loss during a "live" split (i.e. 
> split happens while updates are flowing)
> This was discovered during the shared storage work which was based on a 
> non-release branch_8x sometime before 8.3, hence the first steps are to try 
> and reproduce on the master branch without any shared storage changes.






[GitHub] [lucene-solr] yonik opened a new pull request #920: SOLR-13815: add simple live split test to help debugging possible issue

2019-10-03 Thread GitBox
yonik opened a new pull request #920: SOLR-13815: add simple live split test to 
help debugging possible issue
URL: https://github.com/apache/lucene-solr/pull/920
 
 
   





[jira] [Created] (SOLR-13815) Live split can lose data

2019-10-03 Thread Yonik Seeley (Jira)
Yonik Seeley created SOLR-13815:
---

 Summary: Live split can lose data
 Key: SOLR-13815
 URL: https://issues.apache.org/jira/browse/SOLR-13815
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Yonik Seeley


This issue is to investigate potential data loss during a "live" split (i.e. 
split happens while updates are flowing)

This was discovered during the shared storage work which was based on a 
non-release branch_8x sometime before 8.3, hence the first steps are to try and 
reproduce on the master branch without any shared storage changes.






[jira] [Commented] (SOLR-11155) /analysis/field and /analysis/document requests should support points fields

2019-10-03 Thread Alessandro Benedetti (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943631#comment-16943631
 ] 

Alessandro Benedetti commented on SOLR-11155:
-

{code:java}
/** Given the readable value, return the term value that will match it. */
public String readableToIndexed(String val) {
  return toInternal(val);
}
{code}

In the case of PointField this will fail, inviting the caller to use 
toInternalByteRef instead; shouldn't this be fixed as well?

> /analysis/field and /analysis/document requests should support points fields
> 
>
> Key: SOLR-11155
> URL: https://issues.apache.org/jira/browse/SOLR-11155
> Project: Solr
>  Issue Type: Bug
>Reporter: Steven Rowe
>Assignee: Steven Rowe
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 7.0, 7.1, 8.0
>
> Attachments: SOLR-11155.patch, SOLR-11155.patch, SOLR-11155.patch
>
>
> The following added to FieldAnalysisRequestHandlerTest currently fails:
> {code:java}
>   @Test
>   public void testIntPoint() throws Exception {
> FieldAnalysisRequest request = new FieldAnalysisRequest();
> request.addFieldType("pint");
> request.setFieldValue("5");
> handler.handleAnalysisRequest(request, h.getCore().getLatestSchema());
>   }
> {code}
> as follows:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=FieldAnalysisRequestHandlerTest -Dtests.method=testIntPoint 
> -Dtests.seed=167CC259812871FB -Dtests.slow=true -Dtests.locale=fi-FI 
> -Dtests.timezone=Asia/Hebron -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] ERROR   0.01s | FieldAnalysisRequestHandlerTest.testIntPoint <<<
>[junit4]> Throwable #1: java.lang.UnsupportedOperationException: Can't 
> generate internal string in PointField. use PointField.toInternalByteRef
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([167CC259812871FB:6BF651CEF8FF5B04]:0)
>[junit4]>  at 
> org.apache.solr.schema.PointField.toInternal(PointField.java:187)
>[junit4]>  at 
> org.apache.solr.schema.FieldType$DefaultAnalyzer$1.incrementToken(FieldType.java:488)
>[junit4]>  at 
> org.apache.solr.handler.AnalysisRequestHandlerBase.analyzeTokenStream(AnalysisRequestHandlerBase.java:188)
>[junit4]>  at 
> org.apache.solr.handler.AnalysisRequestHandlerBase.analyzeValue(AnalysisRequestHandlerBase.java:102)
>[junit4]>  at 
> org.apache.solr.handler.FieldAnalysisRequestHandler.analyzeValues(FieldAnalysisRequestHandler.java:225)
>[junit4]>  at 
> org.apache.solr.handler.FieldAnalysisRequestHandler.handleAnalysisRequest(FieldAnalysisRequestHandler.java:186)
>[junit4]>  at 
> org.apache.solr.handler.FieldAnalysisRequestHandlerTest.testIntPoint(FieldAnalysisRequestHandlerTest.java:435)
> {noformat}
> If points fields aren't supported by the FieldAnalysisRequestHandler, then 
> this should be directly stated in the error message, which should be a 4XX 
> error rather than a 5XX error.






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943610#comment-16943610
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 9443f7714e8e0b9494bd287314d9ef8ce9ddaa35 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9443f77 ]

SOLR-13105: machine learning docs 26


> A visual guide to Solr Math Expressions and Streaming Expressions
> -
>
> Key: SOLR-13105
> URL: https://issues.apache.org/jira/browse/SOLR-13105
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: Screen Shot 2019-01-14 at 10.56.32 AM.png, Screen Shot 
> 2019-02-21 at 2.14.43 PM.png, Screen Shot 2019-03-03 at 2.28.35 PM.png, 
> Screen Shot 2019-03-04 at 7.47.57 PM.png, Screen Shot 2019-03-13 at 10.47.47 
> AM.png, Screen Shot 2019-03-30 at 6.17.04 PM.png
>
>
> Visualization is now a fundamental element of Solr Streaming Expressions and 
> Math Expressions. This ticket will create a visual guide to Solr Math 
> Expressions and Solr Streaming Expressions that includes *Apache Zeppelin* 
> visualization examples.
> It will also cover using the JDBC expression to *analyze* and *visualize* 
> results from any JDBC compliant data source.
> Intro from the guide:
> {code:java}
> Streaming Expressions exposes the capabilities of Solr Cloud as composable 
> functions. These functions provide a system for searching, transforming, 
> analyzing and visualizing data stored in Solr Cloud collections.
> At a high level there are four main capabilities that will be explored in the 
> documentation:
> * Searching, sampling and aggregating results from Solr.
> * Transforming result sets after they are retrieved from Solr.
> * Analyzing and modeling result sets using probability and statistics and 
> machine learning libraries.
> * Visualizing result sets, aggregations and statistical models of the data.
> {code}
>  
> A few sample visualizations are attached to the ticket.
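As a flavor of the composability described in the intro, here is a minimal sketch of building a nested Streaming Expression as a string (the collection name `logs` and the query are made up for illustration; in practice the expression is sent to Solr's `/stream` endpoint):

```java
public class StreamingExpressionExample {
    public static void main(String[] args) {
        // A search source: fetch error log documents, sorted by service.
        String search = "search(logs, q=\"level:ERROR\", fl=\"id,service\", sort=\"service asc\")";
        // Compose it into a rollup aggregation: count errors per service.
        String expr = "rollup(" + search + ", over=\"service\", count(*))";
        System.out.println(expr);
    }
}
```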






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943609#comment-16943609
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 1db2c728bdb67aead01e02f5e2d6f4ee3401ca8b in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1db2c72 ]

SOLR-13105: machine learning docs 25





[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943605#comment-16943605
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 065870bc22ce1dd19231e74814fd3bae09958e04 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=065870b ]

SOLR-13105: machine learning docs 24





[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943603#comment-16943603
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 77f2a6187d2c465236576a9175afd98cdf67de3b in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=77f2a61 ]

SOLR-13105: machine learning docs 23





[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943602#comment-16943602
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit bb68b3ab64dc23bc61f4fde8f7eb51159d6be144 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bb68b3a ]

SOLR-13105: machine learning docs 22





[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943598#comment-16943598
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit acea0e399da4f6f9effe640b1ce9b51c65bb9343 in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=acea0e3 ]

SOLR-13105: machine learning docs 21





[jira] [Commented] (SOLR-8241) Evaluate W-TinyLfu cache

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-8241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943595#comment-16943595
 ] 

ASF subversion and git services commented on SOLR-8241:
---

Commit 8007ac0cb0c88838ba6e58e56e2bc23374c15dc4 in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8007ac0 ]

SOLR-8241: Add CaffeineCache, an efficient implementation of SolrCache.


> Evaluate W-TinyLfu cache
> 
>
> Key: SOLR-8241
> URL: https://issues.apache.org/jira/browse/SOLR-8241
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Reporter: Ben Manes
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: EvictionBenchmark.png, GetPutBenchmark.png, 
> SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, SOLR-8241.patch, 
> SOLR-8241.patch, SOLR-8241.patch, caffeine-benchmark.txt, proposal.patch, 
> solr_caffeine.patch.gz, solr_jmh_results.json
>
>
> SOLR-2906 introduced an LFU cache and in-progress SOLR-3393 makes it O(1). 
> The discussions seem to indicate that the higher hit rate (vs LRU) is offset 
> by the slower performance of the implementation. An original goal appeared to 
> be to introduce ARC, a patented algorithm that uses ghost entries to retain 
> history information.
> My analysis of Window TinyLfu indicates that it may be a better option. It 
> uses a frequency sketch to compactly estimate an entry's popularity. It uses 
> LRU to capture recency and operates in O(1) time. When run against available 
> academic traces, the policy provides a near-optimal hit rate regardless of the 
> workload.
> I'm getting ready to release the policy in Caffeine, which Solr already has a 
> dependency on. But, the code is fairly straightforward and a port into Solr's 
> caches instead is a pragmatic alternative. More interesting is what the 
> impact would be in Solr's workloads and feedback on the policy's design.
> https://github.com/ben-manes/caffeine/wiki/Efficiency
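To illustrate the "frequency sketch" idea mentioned above, here is a tiny count-min style sketch for intuition only; it is not Caffeine's actual TinyLFU implementation (which uses 4-bit counters and periodic aging), and the sizes and seeds are arbitrary:

```java
public class FrequencySketchDemo {
    // Tiny count-min sketch: several hashed counter rows estimate an item's
    // popularity in O(1) time and compact space; the minimum across rows
    // bounds the true count from above (it may overcount, never undercount).
    static final int ROWS = 4, WIDTH = 256;
    final int[][] counts = new int[ROWS][WIDTH];
    final int[] seeds = {0x9E3779B9, 0x85EBCA6B, 0xC2B2AE35, 0x27D4EB2F};

    int index(int row, Object key) {
        int h = key.hashCode() * seeds[row];
        return (h ^ (h >>> 16)) & (WIDTH - 1); // mask keeps it non-negative
    }

    void increment(Object key) {
        for (int r = 0; r < ROWS; r++) counts[r][index(r, key)]++;
    }

    int estimate(Object key) {
        int min = Integer.MAX_VALUE;
        for (int r = 0; r < ROWS; r++) min = Math.min(min, counts[r][index(r, key)]);
        return min;
    }

    public static void main(String[] args) {
        FrequencySketchDemo sketch = new FrequencySketchDemo();
        for (int i = 0; i < 100; i++) sketch.increment("hot-key");
        sketch.increment("cold-key");
        // Estimates never undercount, so these hold regardless of collisions.
        System.out.println(sketch.estimate("hot-key") >= 100);
        System.out.println(sketch.estimate("cold-key") >= 1);
    }
}
```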






[jira] [Commented] (SOLR-13105) A visual guide to Solr Math Expressions and Streaming Expressions

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943596#comment-16943596
 ] 

ASF subversion and git services commented on SOLR-13105:


Commit 70cd7bb0aea693739bf87e7079a64fba6228596b in lucene-solr's branch 
refs/heads/SOLR-13105-visual from Joel Bernstein
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=70cd7bb ]

SOLR-13105: machine learning docs 20





[jira] [Resolved] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-13814.
---
Resolution: Information Provided

The JIRA issue tracker is not a support portal. Please raise this question on 
the user's list at solr-u...@lucene.apache.org (see 
http://lucene.apache.org/solr/community.html#mailing-lists-irc). There are a 
_lot_ more people watching that list who may be able to help, and you'll 
probably get responses much more quickly.

If it's determined that this really is a code issue or enhancement to Solr and 
not a configuration/usage problem, we can raise a new JIRA or reopen this one.

> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. Servers 2, 4, and 5 
> usually have high RAM usage (> 92%). I restart them, but usage climbs back up after 1-3 
> days; the other servers never reach 90% RAM. The 9 servers are similar: all 9 Solr nodes 
> have the same solr.in.sh configuration, and all 9 use DIH for importing from 
> Oracle. I compared some metrics, but they are not much different (except that 
> servers 1 and 7 have fewer cores than the others). How can I rebalance RAM usage across the 9 
> servers?
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58|778|252|0|






[GitHub] [lucene-solr] thomaswoeckinger commented on issue #665: Fixes SOLR-13539

2019-10-03 Thread GitBox
thomaswoeckinger commented on issue #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-537921984
 
 
   > > pre commit check is still toggling, it is working locally
   > 
   > Precommit has some known issues on master that make it a little flaky 
everywhere. But it does seem like it has a higher rate of failure on Github (on 
all PR's...it's not specific to this one). I wonder what that's about...
   > 
   > Running tests locally now. Will merge later this morning assuming things 
check out.
   
   Great to hear.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] gerlowskija commented on issue #665: Fixes SOLR-13539

2019-10-03 Thread GitBox
gerlowskija commented on issue #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-537920701
 
 
   > pre commit check is still toggling, it is working locally
   
   Precommit has some known issues on master that make it a little flaky 
everywhere.  But it does seem like it has a higher rate of failure on Github 
(on all PR's...it's not specific to this one).  I wonder what that's about...
   
   Running tests locally now.  Will merge later this morning assuming things 
check out.





[jira] [Commented] (LUCENE-8993) Change Maven POM repository URLs to https

2019-10-03 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943501#comment-16943501
 ] 

Uwe Schindler commented on LUCENE-8993:
---

I also updated the ASF parent POM to latest version.

> Change Maven POM repository URLs to https
> -
>
> Key: LUCENE-8993
> URL: https://issues.apache.org/jira/browse/LUCENE-8993
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/build
>Affects Versions: 7.7.2, 8.2, 8.1.1
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0), 8.3
>
> Attachments: LUCENE-8993.patch
>
>
> After fixing LUCENE-8807, I figured out today that Lucene's build system uses 
> HTTPS URLs everywhere, but the POMs deployed to Maven Central still use http 
> (I assume those are inherited from the ANT build).
> This will fix it for later versions by changing the POM templates. Hopefully 
> this will not happen in Gradle!
> [~markrmil...@gmail.com]: Can you make sure that the new Gradle build uses 
> HTTPS for all hard configured repositories (like Cloudera)?
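Concretely, the change amounts to switching repository URLs in the POM templates from http to https, along these lines (illustrative fragment only, not the actual template contents):

```xml
<repositories>
  <repository>
    <id>apache.releases</id>
    <!-- https, not http, so artifact downloads cannot be tampered with in transit -->
    <url>https://repository.apache.org/content/repositories/releases</url>
  </repository>
</repositories>
```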






[jira] [Commented] (LUCENE-8993) Change Maven POM repository URLs to https

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943499#comment-16943499
 ] 

ASF subversion and git services commented on LUCENE-8993:
-

Commit 9d21418dfcc5c884f45ab668579b0391965a18bb in lucene-solr's branch 
refs/heads/branch_8x from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9d21418 ]

LUCENE-8993: Also update to latest version of Apache Parent POM





[jira] [Commented] (LUCENE-8993) Change Maven POM repository URLs to https

2019-10-03 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-8993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943497#comment-16943497
 ] 

ASF subversion and git services commented on LUCENE-8993:
-

Commit 2bdfc39d89c2633edf26271aca2809abe06af8f0 in lucene-solr's branch 
refs/heads/master from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=2bdfc39 ]

LUCENE-8993: Also update to latest version of Apache Parent POM





[jira] [Commented] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Shawn Heisey (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943460#comment-16943460
 ] 

Shawn Heisey commented on SOLR-13814:
-

It is completely normal for any Java program to eventually use all of the max 
heap that you have told it it can have.  That is the nature of Java.  If 
you want it to use less memory, give it a lower max heap; if it's possible 
for the program to run with less heap, then it will.

If you're looking at physical memory usage, and your indexes are big enough, 
it is also completely normal for Solr to cause the OS to use all of the 
unallocated physical memory for disk caching purposes.  With terabytes of 
index, your indexes definitely qualify as big enough.  If any program on the 
system suddenly needs more memory than is currently available, the OS will 
sacrifice the buffers/cache memory to allow the program to use it.
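The max heap referred to here is what you set via -Xmx. As a small aside, here is a sketch of how those JVM size suffixes translate to bytes (a hypothetical helper for illustration, not JVM or Solr code):

```java
public class HeapSizeParser {
    // Parse a JVM-style heap size such as "512m" or "8g" into bytes.
    static long parseSize(String s) {
        s = s.trim().toLowerCase();
        char unit = s.charAt(s.length() - 1);
        long multiplier;
        switch (unit) {
            case 'k': multiplier = 1024L; break;
            case 'm': multiplier = 1024L * 1024; break;
            case 'g': multiplier = 1024L * 1024 * 1024; break;
            default:  return Long.parseLong(s); // plain bytes, no suffix
        }
        return Long.parseLong(s.substring(0, s.length() - 1)) * multiplier;
    }

    public static void main(String[] args) {
        System.out.println(parseSize("512m")); // 536870912
        System.out.println(parseSize("8g"));   // 8589934592
    }
}
```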





[jira] [Commented] (SOLR-13812) SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test coverage

2019-10-03 Thread Munendra S N (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16943456#comment-16943456
 ] 

Munendra S N commented on SOLR-13812:
-

[~cpoerschke]
One small suggestion: instead of a try-catch for exception and message 
validation, could you please use {{expectThrows()}}?
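For readers unfamiliar with the pattern, the behavior of {{expectThrows()}} can be sketched with a minimal stand-in (this is not LuceneTestCase's actual implementation, and the exception message is made up):

```java
public class ExpectThrowsSketch {
    // Minimal stand-in for LuceneTestCase.expectThrows: runs the code, asserts
    // that the expected exception type was thrown, and returns it so the test
    // can also validate its message.
    static <T extends Throwable> T expectThrows(Class<T> expected, Runnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (expected.isInstance(t)) return expected.cast(t);
            throw new AssertionError("Unexpected exception type: " + t, t);
        }
        throw new AssertionError("Expected " + expected.getName() + " but nothing was thrown");
    }

    public static void main(String[] args) {
        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
            () -> { throw new IllegalArgumentException("params must be of even length"); });
        // Message validation happens on the returned exception.
        System.out.println(e.getMessage());
    }
}
```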

> SolrTestCaseJ4.params(String...) javadocs, uneven rejection, basic test 
> coverage
> 
>
> Key: SOLR-13812
> URL: https://issues.apache.org/jira/browse/SOLR-13812
> Project: Solr
>  Issue Type: Test
>Reporter: Diego Ceccarelli
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-13812.patch
>
>
> In 
> https://github.com/apache/lucene-solr/commit/4fedd7bd77219223cb09a660a3e2ce0e89c26eea#diff-21d4224105244d0fb50fe7e586a8495d
>  on https://github.com/apache/lucene-solr/pull/300 for SOLR-11831 
> [~diegoceccarelli] proposes to add javadocs and uneven length parameter 
> rejection for the {{SolrTestCaseJ4.params(String...)}} method.
> This ticket proposes to do that plus to also add basic test coverage for the 
> method, separately from the unrelated SOLR-11831 changes.






[GitHub] [lucene-solr] thomaswoeckinger edited a comment on issue #665: Fixes SOLR-13539

2019-10-03 Thread GitBox
thomaswoeckinger edited a comment on issue #665: Fixes SOLR-13539
URL: https://github.com/apache/lucene-solr/pull/665#issuecomment-537648923
 
 
   The precommit check is still flaky; it works locally.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoan Tran Van updated SOLR-13814:
-
Description: 
I have a SolrCloud with 9 nodes on 9 physical servers. Servers 2, 4, and 5 
usually have high RAM usage (> 92%). I restart them, but usage climbs back up 
within 1-3 days, while the other servers never reach 90% RAM. The 9 servers are 
similar: all 9 Solr nodes have the same solr.in.sh configuration and use DIH to 
import from Oracle. I compared some metrics, but they do not differ much (except 
that servers 1 and 7 have fewer cores than the others). How can RAM usage be 
rebalanced across the 9 servers?
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58|778|252|0|

  was:
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others). How to rebalance ram usage for 9 servers?
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58|778|252|0|


> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. Servers 2, 4, and 5 
> usually have high RAM usage (> 92%). I restart them, but usage climbs back up 
> within 1-3 days, while the other servers never reach 90% RAM. The 9 servers 
> are similar: all 9 Solr nodes have the same solr.in.sh configuration and use 
> DIH to import from Oracle. I compared some metrics, but they do not differ 
> much (except that servers 1 and 7 have fewer cores than the others). How can 
> RAM usage be rebalanced across the 9 servers?
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58|778|252|0|






[jira] [Updated] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoan Tran Van updated SOLR-13814:
-
Description: 
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others). How to rebalance ram usage for 9 servers?
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58|778|252|0|

  was:
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others). How to rebalance ram usage for 9 servers?
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|


> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
> usually have high RAM usage > 92%. I restart but they will increase after 1-3 
> days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
> have same solr.in.sh configuration. 9 solr node use DIH for importing from 
> Oracle. I compare some metrics but there are not much different (except 
> server 1,7 have less core than others). How to rebalance ram usage for 9 
> servers?
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58|778|252|0|






[jira] [Updated] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoan Tran Van updated SOLR-13814:
-
Description: 
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others). How to rebalance ram usage for 9 servers?
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|

  was:
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others). How to rebalance ram usage for 9 server? 
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|


> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
> usually have high RAM usage > 92%. I restart but they will increase after 1-3 
> days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
> have same solr.in.sh configuration. 9 solr node use DIH for importing from 
> Oracle. I compare some metrics but there are not much different (except 
> server 1,7 have less core than others). How to rebalance ram usage for 9 
> servers?
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|






[jira] [Updated] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoan Tran Van updated SOLR-13814:
-
Description: 
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others). How to rebalance ram usage for 9 server? 
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|

  was:
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|


> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
> usually have high RAM usage > 92%. I restart but they will increase after 1-3 
> days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
> have same solr.in.sh configuration. 9 solr node use DIH for importing from 
> Oracle. I compare some metrics but there are not much different (except 
> server 1,7 have less core than others). How to rebalance ram usage for 9 
> server? 
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|






[jira] [Updated] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoan Tran Van updated SOLR-13814:
-
Description: 
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1,7 have less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|

  was:
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1 have less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|


> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
> usually have high RAM usage > 92%. I restart but they will increase after 1-3 
> days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
> have same solr.in.sh configuration. 9 solr node use DIH for importing from 
> Oracle. I compare some metrics but there are not much different (except 
> server 1,7 have less core than others)
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|






[jira] [Updated] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoan Tran Van updated SOLR-13814:
-
Description: 
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for importing from 
Oracle. I compare some metrics but there are not much different (except server 
1 have less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|

  was:
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for import from Oracle. 
I compare some metrics but there are not much different (except server 1 have 
less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|


> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
> usually have high RAM usage > 92%. I restart but they will increase after 1-3 
> days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
> have same solr.in.sh configuration. 9 solr node use DIH for importing from 
> Oracle. I compare some metrics but there are not much different (except 
> server 1 have less core than others)
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|






[jira] [Updated] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoan Tran Van updated SOLR-13814:
-
Description: 
I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
usually have high RAM usage > 92%. I restart but they will increase after 1-3 
days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
have same solr.in.sh configuration. 9 solr node use DIH for import from Oracle. 
I compare some metrics but there are not much different (except server 1 have 
less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|

  was:
I have a SolrCloud with 9 nodes. the server 2, 4, 5 usually have high RAM usage 
> 92%. I restart but they will increase after 1-3 days, others server never 
reach 90% RAM. 9 sever are similar. 9 solr nodes have same solr.in.sh 
configuration. 9 server used DIH for import from Oracle. I compare some metrics 
but there are not much different (excep server 1 have less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|


> Rebalance Ram usages for Solr Cloud
> ---
>
> Key: SOLR-13814
> URL: https://issues.apache.org/jira/browse/SOLR-13814
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Hoan Tran Van
>Priority: Minor
>
> I have a SolrCloud with 9 nodes on 9 physical servers. the server 2, 4, 5 
> usually have high RAM usage > 92%. I restart but they will increase after 1-3 
> days, others servers never reach 90% RAM. 9 severs are similar. 9 solr nodes 
> have same solr.in.sh configuration. 9 solr node use DIH for import from 
> Oracle. I compare some metrics but there are not much different (except 
> server 1 have less core than others)
> |Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
> core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
> (MB)|Total RAM|Cache size|
> |Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
> |Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
> |Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
> |Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
> |Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
> |Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
> |Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
> |Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
> |Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|






[jira] [Created] (SOLR-13814) Rebalance Ram usages for Solr Cloud

2019-10-03 Thread Hoan Tran Van (Jira)
Hoan Tran Van created SOLR-13814:


 Summary: Rebalance Ram usages for Solr Cloud
 Key: SOLR-13814
 URL: https://issues.apache.org/jira/browse/SOLR-13814
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Affects Versions: 6.6
Reporter: Hoan Tran Van


I have a SolrCloud with 9 nodes. the server 2, 4, 5 usually have high RAM usage 
> 92%. I restart but they will increase after 1-3 days, others server never 
reach 90% RAM. 9 sever are similar. 9 solr nodes have same solr.in.sh 
configuration. 9 server used DIH for import from Oracle. I compare some metrics 
but there are not much different (excep server 1 have less core than others)
|Server|Total request last 12h|Total update last 12h|Replica|Leader|Total 
core|index size (TB)|Number of doc (billion)|-/+ buffers/cache (GB)|Ram free 
(MB)|Total RAM|Cache size|
|Server 1|12M|10M|17|37|54|2.3|190|35.00|1288|252|0|
|Server 2|24M|92M|57|18|75|3.2|270|18.00|655|252|0|
|Server 3|23M|95M|38|34|72|2.98|260|53.00|2062|252|0|
|Server 4|16M|99M|28|48|76|3.03|270|30.00|1329|252|0|
|Server 5|15M|15M|67|3|70|2.87|255|15.00|1535|252|0|
|Server 6|15M|11M|37|32|69|2.85|250|71.00|828|252|0|
|Server 7|14M|10M|10|61|71|2.3|191|38.00|2459|252|0|
|Server 8|17M|12M|61|12|73|3|266|86.00|1648|252|0|
|Server 9|15M|12M|16|59|75|3.11|270|58.00|778|252|0|


