[JENKINS] Lucene-Solr-BadApples-NightlyTests-8.x - Build # 2 - Still Failing

2019-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-8.x/2/

9 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([57D72FFE59354B91]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.FullSolrCloudDistribCmdsTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([57D72FFE59354B91]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.FullSolrCloudDistribCmdsTest

Error Message:
Captured an uncaught exception in thread: Thread[id=146479, name=Thread-13484, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=146479, name=Thread-13484, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:36956/y_yfm/collection2_shard4_replica_n17
at __randomizedtesting.SeedInfo.seed([57D72FFE59354B91]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:638)
Caused by: org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:36956/y_yfm/collection2_shard4_replica_n17
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:567)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1019)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:635)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: 
http://127.0.0.1:36956/y_yfm/collection2_shard4_replica_n17
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:661)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:256)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:245)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:213)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:561)
... 6 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)

[jira] [Created] (SOLR-13170) Creating a 10x4 collection in the admin UI shows errors even though the collection is successfully created

2019-01-25 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-13170:
-

 Summary: Creating a 10x4 collection in the admin UI shows errors 
even though the collection is successfully created
 Key: SOLR-13170
 URL: https://issues.apache.org/jira/browse/SOLR-13170
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI
Affects Versions: 8.0, master (9.0)
Reporter: Erick Erickson


Steps to reproduce:

- Set up a 4-node cluster (I did this locally)
- Use the admin UI to create a collection with the _default configset, 10 
shards, and 4 replicas each. Make sure to set maxShardsPerNode to 10 too.
- Go ahead and create it.

The Admin UI cranks away for a bit, then says "Connection to Solr lost" and 
"Collection already exists". However, the collection is created successfully.

I do see this in the logs:
ERROR (OverseerThreadFactory-9-thread-2-processing-n:192.168.1.122:8981_solr) [ 
  ] o.a.s.c.a.c.OverseerCollectionMessageHandler Collection: eoe operation: 
create failed:org.apache.solr.common.SolrException: collection already exists: 
eoe

cURLing this command reports no errors, either in the terminal window or in 
the logs:
http://localhost:8981/solr/admin/collections?action=CREATE&name=eoe1&numShards=10&replicationFactor=4&maxShardsPerNode=10&collection.configName=_default
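
For anyone reproducing this outside the UI, here is a minimal SolrJ sketch of 
the same CREATE call (hypothetical client code, not from the report; it assumes 
the local 4-node cluster described above, with one node on port 8981):

{code:java}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.CollectionAdminResponse;

public class CreateEoe1 {
  public static void main(String[] args) throws Exception {
    // Point at one node of the 4-node cluster from the steps above
    // (the port matches the node used in the cURL reproduction).
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8981/solr").build()) {
      CollectionAdminResponse rsp = CollectionAdminRequest
          .createCollection("eoe1", "_default", 10, 4) // 10 shards x 4 replicas
          .setMaxShardsPerNode(10)
          .process(client);
      System.out.println("CREATE status: " + rsp.getStatus()); // 0 == success
    }
  }
}
{code}

If the problem is UI-only, this should succeed with status 0, matching the cURL 
behavior.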








[JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 11 - Still unstable

2019-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/11/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testZipFDistribution

Error Message:
Zipf distribution not descending!!!

Stack Trace:
java.lang.Exception: Zipf distribution not descending!!!
at 
__randomizedtesting.SeedInfo.seed([13107D9927478B0A:37A510BD30EF8322]:0)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testZipFDistribution(MathExpressionTest.java:3175)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16074 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.MathExpressionTest
   [junit4]   2> 78191 INFO  
(SUITE-MathExpressionTest-seed#[13107D9927478B0A]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/

[jira] [Resolved] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response

2019-01-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-12767.
--
   Resolution: Fixed
Fix Version/s: 7.6

> Deprecate min_rf parameter and always include the achieved rf in the response
> -
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Fix For: 7.6
>
> Attachments: SOLR-12767.patch, SOLR-12767.patch, SOLR-12767.patch, 
> SOLR-12767.patch, SOLR-12767.patch, SOLR-12767.patch
>
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> from going into recovery in cases of short hiccups (since the assumption is that 
> the user is going to retry the request anyway). This is dangerous because if 
> the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2; #1 is still valuable, 
> but there isn't much point in making the parameter an integer: the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean, but I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it, and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?
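
To make the proposal concrete, here is a sketch of what client code would look 
like once the achieved replication factor is always reported (illustrative 
SolrJ, assuming the existing "rf" response key; this is not part of the patch):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class AchievedRfExample {
  // Returns the achieved replication factor Solr reported for the add,
  // or -1 if the response carried no "rf" entry.
  static int addAndGetRf(SolrClient client, String collection) throws Exception {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "1");
    UpdateResponse rsp = client.add(collection, doc);
    Object rf = rsp.getResponse().get("rf");
    return (rf instanceof Integer) ? (Integer) rf : -1;
  }
}
{code}

A caller that previously sent min_rf would instead compare the reported value 
against its own threshold and retry as needed.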






[jira] [Commented] (SOLR-13168) tlog replicas wait for sync on every commit when solr is run with java assertions

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752819#comment-16752819
 ] 

ASF subversion and git services commented on SOLR-13168:


Commit ec6835906518b97ea03bbdb3b01442711cf9f943 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ec68359 ]

SOLR-13168: Fixed a bug in TestInjection that caused test-only code to be 
invoked when TLOG replicas received commits if Java assertions were enabled

(see also: SOLR-12313)


> tlog replicas wait for sync on every commit when solr is run with java 
> assertions
> -
>
> Key: SOLR-13168
> URL: https://issues.apache.org/jira/browse/SOLR-13168
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> Due to a bug in how {{TestInjection.waitForInSyncWithLeader}} was 
> implemented, the test injection code can "leak" into non-test instances of 
> solr in situations where java assertions were enabled at run time.
> This results in tlog replicas stalling on commit commands, and waiting for 
> the regular scheduled/timed replication to take place before allowing the 
> commit to succeed -- meaning that the commit commands can time out.
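
For readers unfamiliar with the mechanism: Solr invokes TestInjection hooks 
through Java assert statements, so a hook's body runs only when assertions are 
enabled, which is exactly how a buggy hook can affect any assertion-enabled 
instance, not just tests. A simplified sketch of the idiom (illustrative, not 
the actual Solr code):

{code:java}
public class InjectionPatternSketch {
  // Mirrors the TestInjection idiom: the hook always returns true so the
  // assert itself never fails, but its side effects run whenever the JVM
  // is started with -ea (Java assertions enabled).
  static boolean waitForInSyncWithLeader() {
    // In the buggy version this waited for the scheduled replication
    // poll, stalling TLOG replica commits even outside of tests.
    try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
    return true;
  }

  static void commit() {
    // Production code path: the call below is skipped entirely unless
    // assertions are enabled.
    assert waitForInSyncWithLeader();
  }
}
{code}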






[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752820#comment-16752820
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit ec6835906518b97ea03bbdb3b01442711cf9f943 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ec68359 ]

SOLR-13168: Fixed a bug in TestInjection that caused test-only code to be 
invoked when TLOG replicas received commits if Java assertions were enabled

(see also: SOLR-12313)


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it causes BasicDistributedZkTest to take forever sometimes, 
> and perhaps other tests?
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does - if 
> the slave is higher than the leader, it's in sync, being equal is not 
> required. If it's expected for a test it should be a specific test that 
> fails. This just introduces massive delays.
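
The comparison rule described above reduces to a single line; a sketch with 
illustrative names:

{code:java}
public class SyncCheckSketch {
  // The rule as replication applies it: a follower at OR ahead of the
  // leader's index version counts as in sync; equality is not required.
  static boolean inSyncWithLeader(long followerVersion, long leaderVersion) {
    return followerVersion >= leaderVersion;
  }
}
{code}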






[jira] [Commented] (SOLR-13168) tlog replicas wait for sync on every commit when solr is run with java assertions

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752815#comment-16752815
 ] 

ASF subversion and git services commented on SOLR-13168:


Commit b7a8ca98b6e42e1d48952cd20f1957c19cf3b73b in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b7a8ca9 ]

SOLR-13168: Fixed a bug in TestInjection that caused test-only code to be 
invoked when TLOG replicas received commits if Java assertions were enabled

(see also: SOLR-12313)

(cherry picked from commit ec6835906518b97ea03bbdb3b01442711cf9f943)


> tlog replicas wait for sync on every commit when solr is run with java 
> assertions
> -
>
> Key: SOLR-13168
> URL: https://issues.apache.org/jira/browse/SOLR-13168
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> Due to a bug in how {{TestInjection.waitForInSyncWithLeader}} was 
> implemented, the test injection code can "leak" into non-test instances of 
> solr in situations where java assertions were enabled at run time.
> This results in tlog replicas stalling on commit commands, and waiting for 
> the regular scheduled/timed replication to take place before allowing the 
> commit to succeed -- meaning that the commit commands can time out.






[jira] [Resolved] (SOLR-13168) tlog replicas wait for sync on every commit when solr is run with java assertions

2019-01-25 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-13168.
-
   Resolution: Fixed
Fix Version/s: master (9.0)
   7.7
   8.0

> tlog replicas wait for sync on every commit when solr is run with java 
> assertions
> -
>
> Key: SOLR-13168
> URL: https://issues.apache.org/jira/browse/SOLR-13168
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: 8.0, 7.7, master (9.0)
>
>
> Due to a bug in how {{TestInjection.waitForInSyncWithLeader}} was 
> implemented, the test injection code can "leak" into non-test instances of 
> solr in situations where java assertions were enabled at run time.
> This results in tlog replicas stalling on commit commands, and waiting for 
> the regular scheduled/timed replication to take place before allowing the 
> commit to succeed -- meaning that the commit commands can time out.






[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752816#comment-16752816
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit b7a8ca98b6e42e1d48952cd20f1957c19cf3b73b in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b7a8ca9 ]

SOLR-13168: Fixed a bug in TestInjection that caused test-only code to be 
invoked when TLOG replicas received commits if Java assertions were enabled

(see also: SOLR-12313)

(cherry picked from commit ec6835906518b97ea03bbdb3b01442711cf9f943)


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it causes BasicDistributedZkTest to take forever sometimes, 
> and perhaps other tests?
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does - if 
> the slave is higher than the leader, it's in sync, being equal is not 
> required. If it's expected for a test it should be a specific test that 
> fails. This just introduces massive delays.






[jira] [Commented] (SOLR-13168) tlog replicas wait for sync on every commit when solr is run with java assertions

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752817#comment-16752817
 ] 

ASF subversion and git services commented on SOLR-13168:


Commit fa22ab89563a543917db13d156db7190a25bc5a4 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fa22ab8 ]

SOLR-13168: Fixed a bug in TestInjection that caused test-only code to be 
invoked when TLOG replicas received commits if Java assertions were enabled

(see also: SOLR-12313)

(cherry picked from commit ec6835906518b97ea03bbdb3b01442711cf9f943)


> tlog replicas wait for sync on every commit when solr is run with java 
> assertions
> -
>
> Key: SOLR-13168
> URL: https://issues.apache.org/jira/browse/SOLR-13168
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> Due to a bug in how {{TestInjection.waitForInSyncWithLeader}} was 
> implemented, the test injection code can "leak" into non-test instances of 
> solr in situations where java assertions were enabled at run time.
> This results in tlog replicas stalling on commit commands, and waiting for 
> the regular scheduled/timed replication to take place before allowing the 
> commit to succeed -- meaning that the commit commands can time out.






[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752818#comment-16752818
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit fa22ab89563a543917db13d156db7190a25bc5a4 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=fa22ab8 ]

SOLR-13168: Fixed a bug in TestInjection that caused test-only code to be 
invoked when TLOG replicas received commits if Java assertions were enabled

(see also: SOLR-12313)

(cherry picked from commit ec6835906518b97ea03bbdb3b01442711cf9f943)


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it causes BasicDistributedZkTest to take forever sometimes, 
> and perhaps other tests?
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does - if 
> the slave is higher than the leader, it's in sync, being equal is not 
> required. If it's expected for a test it should be a specific test that 
> fails. This just introduces massive delays.






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1759 - Failure

2019-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1759/

2 tests failed.
FAILED:  
org.apache.lucene.search.TestSearcherManager.testConcurrentIndexCloseSearchAndRefresh

Error Message:
Captured an uncaught exception in thread: Thread[id=7720, name=Thread-7439, 
state=RUNNABLE, group=TGRP-TestSearcherManager]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=7720, name=Thread-7439, state=RUNNABLE, 
group=TGRP-TestSearcherManager]
Caused by: java.lang.RuntimeException: java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/core/test/J1/temp/lucene.search.TestSearcherManager_EB3A49793AC5C870-001/tempDir-001/_np_Asserting_0.pos:
 Too many open files
at __randomizedtesting.SeedInfo.seed([EB3A49793AC5C870]:0)
at 
org.apache.lucene.search.TestSearcherManager$11.run(TestSearcherManager.java:677)
Caused by: java.nio.file.FileSystemException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/lucene/build/core/test/J1/temp/lucene.search.TestSearcherManager_EB3A49793AC5C870-001/tempDir-001/_np_Asserting_0.pos:
 Too many open files
at 
org.apache.lucene.mockfile.HandleLimitFS.onOpen(HandleLimitFS.java:48)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:271)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newByteChannel(HandleTrackingFS.java:240)
at 
org.apache.lucene.mockfile.FilterFileSystemProvider.newByteChannel(FilterFileSystemProvider.java:212)
at java.nio.file.Files.newByteChannel(Files.java:361)
at java.nio.file.Files.newByteChannel(Files.java:407)
at 
org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77)
at 
org.apache.lucene.util.LuceneTestCase.slowFileExists(LuceneTestCase.java:2801)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:742)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.<init>(Lucene50PostingsReader.java:91)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:443)
at 
org.apache.lucene.codecs.asserting.AssertingPostingsFormat.fieldsProducer(AssertingPostingsFormat.java:59)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat$FieldsReader.<init>(PerFieldPostingsFormat.java:288)
at 
org.apache.lucene.codecs.perfield.PerFieldPostingsFormat.fieldsProducer(PerFieldPostingsFormat.java:368)
at 
org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:113)
at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:83)
at 
org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:172)
at 
org.apache.lucene.index.ReadersAndUpdates.getReadOnlyClone(ReadersAndUpdates.java:214)
at 
org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:106)
at org.apache.lucene.index.IndexWriter.getReader(IndexWriter.java:526)
at 
org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:103)
at 
org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:108)
at 
org.apache.lucene.search.SearcherManager.<init>(SearcherManager.java:76)
at 
org.apache.lucene.search.TestSearcherManager$11.run(TestSearcherManager.java:665)


FAILED:  
org.apache.solr.uninverting.TestDocTermOrdsUninvertLimit.testTriggerUnInvertLimit

Error Message:
Java heap space

Stack Trace:
java.lang.OutOfMemoryError: Java heap space
at 
__randomizedtesting.SeedInfo.seed([58B9E555CAC51058:6B0BCD91C772CAEF]:0)
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at 
org.apache.lucene.store.ByteBuffersDataOutput$$Lambda$485/1725203096.apply(Unknown Source)
at 
org.apache.lucene.store.ByteBuffersDataOutput.appendBlock(ByteBuffersDataOutput.java:447)
at 
org.apache.lucene.store.ByteBuffersDataOutput.writeBytes(ByteBuffersDataOutput.java:164)
at org.apache.lucene.store.DataOutput.copyBytes(DataOutput.java:278)
at 
org.apache.lucene.store.ByteBuffersIndexOutput.copyBytes(ByteBuffersIndexOutput.java:151)
at 
org.apache.lucene.store.MockIndexOutputWrapper.copyBytes(MockIndexOutputWrapper.java:165)
at 
org.apache.lucene.codecs.lucene50.Lucene50CompoundFormat.write(Lucene50CompoundFormat.java:96)
at 
org.apache.lucene.index.IndexWriter.createCompoundFile(IndexWriter.java:5010)
at 
org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4523)
at org.apache.lucene.index.IndexWriter.merge(Ind

[jira] [Commented] (SOLR-12373) Let DocBasedVersionConstraintsProcessor define fields to use in tombstones

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752699#comment-16752699
 ] 

ASF subversion and git services commented on SOLR-12373:


Commit acb6936dc2392e8585007ce47ea19724e0e75104 in lucene-solr's branch 
refs/heads/branch_7x from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=acb6936 ]

SOLR-12373: Let DocBasedVersionConstraintsProcessor define fields to use in 
tombstones

A new config option, "tombstoneConfig" allows the DocBasedVersionConstraintsProcessor 
to add extra fields to the tombstone generated when a document is deleted. This 
can be useful when the schema has required fields.


> Let DocBasedVersionConstraintsProcessor define fields to use in tombstones
> --
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.7
>
> Attachments: SOLR-12373.patch, SOLR-12373.patch, SOLR-12373.patch, 
> SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.






[jira] [Commented] (SOLR-12373) Let DocBasedVersionConstraintsProcessor define fields to use in tombstones

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752706#comment-16752706
 ] 

ASF subversion and git services commented on SOLR-12373:


Commit 4070f56a56663d3e1f42b5018dbad5925e5db1c8 in lucene-solr's branch 
refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4070f56 ]

SOLR-12373: Remove deprecated constructor


> Let DocBasedVersionConstraintsProcessor define fields to use in tombstones
> --
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.7
>
> Attachments: SOLR-12373.patch, SOLR-12373.patch, SOLR-12373.patch, 
> SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.






[jira] [Commented] (SOLR-12373) Let DocBasedVersionConstraintsProcessor define fields to use in tombstones

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752705#comment-16752705
 ] 

ASF subversion and git services commented on SOLR-12373:


Commit ef81dadc7dcbb4a00a96a701be334f1aaecc47e4 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ef81dad ]

SOLR-12373: Remove deprecated constructor


> Let DocBasedVersionConstraintsProcessor define fields to use in tombstones
> --
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.7
>
> Attachments: SOLR-12373.patch, SOLR-12373.patch, SOLR-12373.patch, 
> SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.






[jira] [Commented] (SOLR-12373) Let DocBasedVersionConstraintsProcessor define fields to use in tombstones

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752697#comment-16752697
 ] 

ASF subversion and git services commented on SOLR-12373:


Commit 45bf00bf05ad4e0f92fddd0e11dd1f8871dc5751 in lucene-solr's branch 
refs/heads/branch_8x from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=45bf00b ]

SOLR-12373: Let DocBasedVersionConstraintsProcessor define fields to use in 
tombstones

A new config option, "tombstoneConfig" allows the DocBasedVersionConstraintsProcessor 
to add extra fields to the tombstone generated when a document is deleted. This 
can be useful when the schema has required fields.


> Let DocBasedVersionConstraintsProcessor define fields to use in tombstones
> --
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.7
>
> Attachments: SOLR-12373.patch, SOLR-12373.patch, SOLR-12373.patch, 
> SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.






[jira] [Commented] (SOLR-12373) Let DocBasedVersionConstraintsProcessor define fields to use in tombstones

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752696#comment-16752696
 ] 

ASF subversion and git services commented on SOLR-12373:


Commit 0bd1911db6de9f38f74fc61398bd1fc3f42037a2 in lucene-solr's branch 
refs/heads/master from Tomas Eduardo Fernandez Lobbe
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0bd1911 ]

SOLR-12373: Let DocBasedVersionConstraintsProcessor define fields to use in 
tombstones

A new config option, "tombstoneConfig" allows the 
DocBasedVersionConstraintsProcessor
to add extra fields to the tombstone generated when a document is deleted. This 
can
be useful when the schema has required fields.


> Let DocBasedVersionConstraintsProcessor define fields to use in tombstones
> --
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.7
>
> Attachments: SOLR-12373.patch, SOLR-12373.patch, SOLR-12373.patch, 
> SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.






[jira] [Updated] (SOLR-12373) Let DocBasedVersionConstraintsProcessor define fields to use in tombstones

2019-01-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-12373:
-
Fix Version/s: 7.7
   Issue Type: Improvement  (was: Bug)
  Summary: Let DocBasedVersionConstraintsProcessor define fields to use 
in tombstones  (was: DocBasedVersionConstraintsProcessor doesn't work when 
schema has required fields)

> Let DocBasedVersionConstraintsProcessor define fields to use in tombstones
> --
>
> Key: SOLR-12373
> URL: https://issues.apache.org/jira/browse/SOLR-12373
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
> Fix For: 7.7
>
> Attachments: SOLR-12373.patch, SOLR-12373.patch, SOLR-12373.patch, 
> SOLR-12373.patch
>
>
> DocBasedVersionConstraintsProcessor creates tombstones when processing a 
> delete by id. Those tombstones only have id (or whatever the unique key name 
> is) and version field(s), however, if the schema defines some required 
> fields, adding the tombstone will fail.






[jira] [Resolved] (SOLR-12770) Make it possible to configure a shards whitelist for master/slave

2019-01-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe resolved SOLR-12770.
--
Resolution: Fixed

> Make it possible to configure a shards whitelist for master/slave
> -
>
> Key: SOLR-12770
> URL: https://issues.apache.org/jira/browse/SOLR-12770
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Jan Høydahl
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Labels: masterSlave
> Fix For: 7.7
>
>
> For legacy master/slave clusters, there is no Zookeeper to keep track of all 
> the nodes and shards in the cluster. So users manage the 'shards' parameter 
> manually for distributed search. This issue will add the option of 
> configuring a list of what shards can be requested.
> Users will then get an explicit error response if the request includes a 
> shard which is not in the preconfigured whitelist, e.g. due to a typo. I 
> think all shards logic is handled by HttpShardHandler already so the logic 
> should fit nicely in that one class, configured in {{solr.xml}}.
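
For context, this is the kind of manually managed distributed request the 
whitelist would guard (illustrative SolrJ; the host and core names are 
placeholders, not from the issue):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class ShardsParamExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://master:8983/solr").build()) {
      SolrQuery q = new SolrQuery("*:*");
      // Manually managed shards list, as in legacy master/slave setups.
      // With the proposed whitelist, an unlisted host here (e.g. a typo)
      // would fail fast with an explicit error instead of being queried.
      q.set("shards", "slave1:8983/solr/core1,slave2:8983/solr/core1");
      QueryResponse rsp = client.query("core1", q);
      System.out.println(rsp.getResults().getNumFound());
    }
  }
}
{code}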






[jira] [Assigned] (SOLR-12770) Make it possible to configure a shards whitelist for master/slave

2019-01-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe reassigned SOLR-12770:


Assignee: Tomás Fernández Löbbe

> Make it possible to configure a shards whitelist for master/slave
> -
>
> Key: SOLR-12770
> URL: https://issues.apache.org/jira/browse/SOLR-12770
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Jan Høydahl
>Assignee: Tomás Fernández Löbbe
>Priority: Major
>  Labels: masterSlave
> Fix For: 7.7
>
>
> For legacy master/slave clusters, there is no Zookeeper to keep track of all 
> the nodes and shards in the cluster. So users manage the 'shards' parameter 
> manually for distributed search. This issue will add the option of 
> configuring a list of what shards can be requested.
> Users will then get an explicit error response if the request includes a 
> shard which is not in the preconfigured whitelist, e.g. due to a typo. I 
> think all shards logic is handled by HttpShardHandler already so the logic 
> should fit nicely in that one class, configured in {{solr.xml}}.






[jira] [Created] (SOLR-13169) Move Replica Docs need improvement (V1 and V2 introspect)

2019-01-25 Thread Gus Heck (JIRA)
Gus Heck created SOLR-13169:
---

 Summary: Move Replica Docs need improvement (V1 and V2 introspect)
 Key: SOLR-13169
 URL: https://issues.apache.org/jira/browse/SOLR-13169
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: v2 API
Reporter: Gus Heck


At a minimum, required parameters should be noted equally in both places. 
Conversation with [~ab] indicates that there are also some discrepancies between 
what is and is not actually required in the docs vs. the code ("in MoveReplicaCmd, 
if you specify “replica” then “shard” is completely ignored").

Also, in v2 it seems the shard might be inferred from the URL, and in that case 
it's not clear whether the URL or the JSON takes precedence.

From introspect:

{code:java}
"move-replica": {
    "type": "object",
    "documentation": "https://lucene.apache.org/solr/guide/collections-api.html#movereplica",
    "description": "This command moves a replica from one node to a new node. In case of shared filesystems the `dataDir` and `ulogDir` may be reused.",
    "properties": {
        "replica": {
            "type": "string",
            "description": "The name of the replica"
        },
        "shard": {
            "type": "string",
            "description": "The name of the shard"
        },
        "sourceNode": {
            "type": "string",
            "description": "The name of the node that contains the replica."
        },
        "targetNode": {
            "type": "string",
            "description": "The name of the destination node. This parameter is required."
        },
        "waitForFinalState": {
            "type": "boolean",
            "default": "false",
            "description": "Wait for the moved replica to become active."
        },
        "timeout": {
            "type": "integer",
            "default": 600,
            "description": "Timeout to wait for replica to become active. For very large replicas this may need to be increased."
        },
        "inPlaceMove": {
            "type": "boolean",
            "default": "true",
            "description": "For replicas that use shared filesystems allow 'in-place' move that reuses shared data."
        }
{code}

From the ref guide for V1, the MOVEREPLICA parameters:

collection: The name of the collection. This parameter is required.
shard: The name of the shard that the replica belongs to. This parameter is required.
replica: The name of the replica. This parameter is required.
sourceNode: The name of the node that contains the replica. This parameter is required.
targetNode: The name of the destination node. This parameter is required.
async: Request ID to track this action, which will be processed asynchronously.
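
For reference, a SolrJ sketch of the replica-addressed invocation whose 
documentation is in question, using SolrJ's CollectionAdminRequest.MoveReplica 
helper (the collection, replica, and node names are placeholders):

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class MoveReplicaExample {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      // Replica-addressed form: per the conversation quoted above, when
      // "replica" is given, MoveReplicaCmd ignores any "shard" parameter.
      new CollectionAdminRequest.MoveReplica("test", "core_node6", "node2:8983_solr")
          .process(client);
    }
  }
}
{code}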









[jira] [Commented] (SOLR-13165) enabling docValues on a tdate field and searching on the field is very slow

2019-01-25 Thread Sheeba Dhanaraj (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752651#comment-16752651
 ] 

Sheeba Dhanaraj commented on SOLR-13165:


I'm using schema version 1.5

> enabling docValues on a tdate field and searching on the field is very slow
> ---
>
> Key: SOLR-13165
> URL: https://issues.apache.org/jira/browse/SOLR-13165
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Sheeba Dhanaraj
>Priority: Major
>
> When we enable docValues on a tdate field and search on the field, response 
> time is very slow. When we remove docValues from the field, performance is 
> significantly improved. Is this by design? Should we not enable docValues for 
> tdate fields?






Re: Lucene/Solr 8.0

2019-01-25 Thread Jan Høydahl
I don't think it is critical for this to be a blocker for 8.0. If it gets fixed 
in 8.0.1 that's ok too, given this is an ooold bug.
I think we should simply remove the buffering feature in the UI and replace it 
with an error message popup or something.
I'll try to take a look next week.

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> On 25 Jan 2019, at 20:39, Tomás Fernández Löbbe wrote:
> 
> I think the UI is an important Solr feature. As long as there is a reasonable 
> time horizon for the issue being resolved I'm +1 on making it a blocker. I'm 
> not familiar enough with the UI code to help either unfortunately.
> 
> On Fri, Jan 25, 2019 at 11:24 AM Gus Heck wrote:
> It looks like someone tried to make it a blocker once before... And it's 
> actually a duplicate of an earlier issue 
> (https://issues.apache.org/jira/browse/SOLR-9818). I guess it's a question of 
> whether or not overall quality has a bearing on the decision to release. As 
> it turns out the screen shot I posted to the issue is less than half of the 
> shards that eventually got created since there was an outstanding queue of 
> requests still processing at the time. I'm now having to delete 50 or so 
> cores, which luckily are small 100 Mb initial testing cores, not the 20GB 
> cores we'll be testing on in the near future. It more or less makes it 
> impossible to recommend the use of the admin UI for anything other than read 
> only observation of the cluster. Now imagine someone leaves a browser window 
> open and forgets about it rather than browsing away or closing the window, 
> not knowing that it's silently pumping out requests after showing an error... 
> would completely hose a node, and until they tracked down the source of the 
> requests, (hope he didn't go home) it would be impossible to resolve...
> 
> On Fri, Jan 25, 2019 at 1:25 PM Adrien Grand wrote:
> Releasing a new major is very challenging on its own, I'd rather not
> call it a blocker and delay the release for it since this isn't a new
> regression in 8.0: it looks like a problem that has affected Solr
> since at least 6.3? I'm not familiar with the UI code at all, but
> maybe this is something that could get fixed before we build a RC?
> 
> 
> 
> 
> On Fri, Jan 25, 2019 at 6:06 PM Gus Heck wrote:
> >
> > I'd like to suggest that https://issues.apache.org/jira/browse/SOLR-10211 
> > be promoted to block 8.0. I just got burned by it a second time.
> >
> > On Thu, Jan 24, 2019 at 1:05 PM Uwe Schindler  > > wrote:
> >>
> >> Cool,
> >>
> >> I am working on giving my best release time guess as possible on the 
> >> FOSDEM conference!
> >>
> >> Uwe
> >>
> >> -
> >> Uwe Schindler
> >> Achterdiek 19, D-28357 Bremen
> >> http://www.thetaphi.de
> >> eMail: u...@thetaphi.de
> >>
> >> > -Original Message-
> >> > From: Adrien Grand <jpou...@gmail.com>
> >> > Sent: Thursday, January 24, 2019 5:33 PM
> >> > To: Lucene Dev <dev@lucene.apache.org>
> >> > Subject: Re: Lucene/Solr 8.0
> >> >
> >> > +1 to release 7.7 and 8.0 in a row starting on the week of February 4th.
> >> >
> >> > On Wed, Jan 23, 2019 at 4:23 PM jim ferenczi wrote:
> >> > >
> >> > > Hi,
> >> > > As we agreed some time ago I'd like to start on releasing 8.0. The 
> >> > > branch is
> >> > already created so we can start the process anytime now. Unless there are
> >> > objections I'd like to start the feature freeze next week in order to 
> >> > build the
> >> > first candidate the week after.
> >> > > We'll also need a 7.7 release but I think we can handle both with Alan 
> >> > > so
> >> > the question now is whether we are ok to start the release process or if 
> >> > there
> >> > are any blockers left ;).
> >> > >
> >> > >
> >> > > On Tue, 15 Jan 2019 at 11:35, Alan Woodward wrote:
> >> > >>
> >> > >> I’ve started to work through the various deprecations on the new 
> >> > >> master
> >> > branch.  There are a lot of them, and I’m going to need some assistance 
> >> > for
> >> > several of them, as it’s not entirely clear what to do.
> >> > >>
> >> > >> I’ll open two overarching issues in JIRA, one for lucene and one for 
> >> > >> Solr,
> >> > with lists of the deprecations that need to be removed in each one.  
> >> > I’ll create
> >> > a shared branch on gitbox to work against, and push the changes I’ve 
> >> > already
> >> > done there.  We can then create individual JIRA issues for any changes 
> >> > that
> >> > are more involved than just deleting code.
> >> > >>
> >> > >> All assistance gratefully received, particularly for the Solr 
> >> > >> deprecations
> >> > where there’s a lot

[jira] [Updated] (SOLR-9515) Update to Hadoop 3

2019-01-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9515:
---
Fix Version/s: master (9.0)
   8.0

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch
>
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






[jira] [Assigned] (SOLR-9515) Update to Hadoop 3

2019-01-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-9515:
--

Assignee: Kevin Risden  (was: Mark Miller)

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 8.0, master (9.0)
>
> Attachments: SOLR-9515.patch
>
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.






Re: Lucene/Solr 8.0

2019-01-25 Thread Kevin Risden
I am hoping to take a look at upgrading the Hadoop 2.x dependencies to
3.x this weekend/upcoming week before the feature freeze. I know I am
a bit late in starting this, but it would be great not to be stuck on
Hadoop 2.x much longer. SOLR-9515 was filed by Mark Miller a while ago
for this. There are quite a few Solr JIRAs about issues with JDK9+ and
many of these have been fixed in the Hadoop 3.1/3.2 timeframe. I'm
hoping to sit down and figure out the details. Mark Miller had
previously put up a patch and Hrishikesh Gadre had created JIRAs
(SOLR-9761) for cleaning up some of the security pieces.

I am first looking to make sure Hadoop 3.x works on JDK8 and then can
figure out how many of the JDK9+ JIRAs have been resolved.

Kevin Risden

On Fri, Jan 25, 2019 at 2:40 PM Tomás Fernández Löbbe wrote:
>
> I think the UI is an important Solr feature. As long as there is a reasonable 
> time horizon for the issue being resolved I'm +1 on making it a blocker. I'm 
> not familiar enough with the UI code to help either unfortunately.
>
> On Fri, Jan 25, 2019 at 11:24 AM Gus Heck  wrote:
>>
>> It looks like someone tried to make it a blocker once before... And it's 
>> actually a duplicate of an earlier issue 
>> (https://issues.apache.org/jira/browse/SOLR-9818). I guess it's a question of 
>> whether or not overall quality has a bearing on the decision to release. As 
>> it turns out the screen shot I posted to the issue is less than half of the 
>> shards that eventually got created since there was an outstanding queue of 
>> requests still processing at the time. I'm now having to delete 50 or so 
>> cores, which luckily are small 100 Mb initial testing cores, not the 20GB 
>> cores we'll be testing on in the near future. It more or less makes it 
>> impossible to recommend the use of the admin UI for anything other than read 
>> only observation of the cluster. Now imagine someone leaves a browser window 
>> open and forgets about it rather than browsing away or closing the window, 
>> not knowing that it's silently pumping out requests after showing an 
>> error... would completely hose a node, and until they tracked down the 
>> source of the requests, (hope he didn't go home) it would be impossible to 
>> resolve...
>>
>> On Fri, Jan 25, 2019 at 1:25 PM Adrien Grand  wrote:
>>>
>>> Releasing a new major is very challenging on its own, I'd rather not
>>> call it a blocker and delay the release for it since this isn't a new
>>> regression in 8.0: it looks like a problem that has affected Solr
>>> since at least 6.3? I'm not familiar with the UI code at all, but
>>> maybe this is something that could get fixed before we build a RC?
>>>
>>>
>>>
>>>
>>> On Fri, Jan 25, 2019 at 6:06 PM Gus Heck  wrote:
>>> >
>>> > I'd like to suggest that https://issues.apache.org/jira/browse/SOLR-10211 
>>> > be promoted to block 8.0. I just got burned by it a second time.
>>> >
>>> > On Thu, Jan 24, 2019 at 1:05 PM Uwe Schindler  wrote:
>>> >>
>>> >> Cool,
>>> >>
>>> >> I am working on giving my best possible release time guess at the 
>>> >> FOSDEM conference!
>>> >>
>>> >> Uwe
>>> >>
>>> >> -
>>> >> Uwe Schindler
>>> >> Achterdiek 19, D-28357 Bremen
>>> >> http://www.thetaphi.de
>>> >> eMail: u...@thetaphi.de
>>> >>
>>> >> > -Original Message-
>>> >> > From: Adrien Grand 
>>> >> > Sent: Thursday, January 24, 2019 5:33 PM
>>> >> > To: Lucene Dev 
>>> >> > Subject: Re: Lucene/Solr 8.0
>>> >> >
>>> >> > +1 to release 7.7 and 8.0 in a row starting on the week of February 
>>> >> > 4th.
>>> >> >
>>> >> > On Wed, Jan 23, 2019 at 4:23 PM jim ferenczi 
>>> >> > wrote:
>>> >> > >
>>> >> > > Hi,
>>> >> > > As we agreed some time ago I'd like to start on releasing 8.0. The 
>>> >> > > branch is
>>> >> > already created so we can start the process anytime now. Unless there 
>>> >> > are
>>> >> > objections I'd like to start the feature freeze next week in order to 
>>> >> > build the
>>> >> > first candidate the week after.
>>> >> > > We'll also need a 7.7 release but I think we can handle both with 
>>> >> > > Alan so
>>> >> > the question now is whether we are ok to start the release process or 
>>> >> > if there
>>> >> > are any blockers left ;).
>>> >> > >
>>> >> > >
>>> >> > > Le mar. 15 janv. 2019 à 11:35, Alan Woodward 
>>> >> > a écrit :
>>> >> > >>
>>> >> > >> I’ve started to work through the various deprecations on the new 
>>> >> > >> master
>>> >> > branch.  There are a lot of them, and I’m going to need some 
>>> >> > assistance for
>>> >> > several of them, as it’s not entirely clear what to do.
>>> >> > >>
>>> >> > >> I’ll open two overarching issues in JIRA, one for lucene and one 
>>> >> > >> for Solr,
>>> >> > with lists of the deprecations that need to be removed in each one.  
>>> >> > I’ll create
>>> >> > a shared branch on gitbox to work against, and push the changes I’ve 
>>> >> > already
>>> >> > done there.  We can then create individual JIRA issues for any changes 
>>> >> > that
>>> >> > are more involved than just deleting code.

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 3453 - Still Unstable!

2019-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3453/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([337223DD46AD0AC3]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:139)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:878)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([337223DD46AD0AC3]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.afterClass(TestStressCloudBlindAtomicUpdates.java:160)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:901)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestR

Re: Lucene/Solr 8.0

2019-01-25 Thread Tomás Fernández Löbbe
I think the UI is an important Solr feature. As long as there is a
reasonable time horizon for the issue being resolved, I'm +1 on making it a
blocker. I'm not familiar enough with the UI code to help either,
unfortunately.

On Fri, Jan 25, 2019 at 11:24 AM Gus Heck  wrote:

> It looks like someone tried to make it a blocker once before... And it's
> actually a duplicate of an earlier issue (
> https://issues.apache.org/jira/browse/SOLR-9818). I guess it's a question
> of whether or not overall quality has a bearing on the decision to release.
> As it turns out, the screenshot I posted to the issue shows less than half of
> the shards that eventually got created, since there was an outstanding queue
> of requests still processing at the time. I'm now having to delete 50 or so
> cores, which luckily are small 100 MB initial testing cores, not the 20 GB
> cores we'll be testing on in the near future. It more or less makes it
> impossible to recommend the use of the admin UI for anything other than
> read-only observation of the cluster. Now imagine someone leaves a browser
> window open and forgets about it rather than browsing away or closing the
> window, not knowing that it's silently pumping out requests after showing
> an error... it would completely hose the node, and until they tracked down the
> source of the requests (hope they didn't go home), it would be impossible to
> resolve...
>
> On Fri, Jan 25, 2019 at 1:25 PM Adrien Grand  wrote:
>
>> Releasing a new major is very challenging on its own; I'd rather not
>> call it a blocker and delay the release for it, since this isn't a new
>> regression in 8.0: it looks like a problem that has affected Solr
>> since at least 6.3? I'm not familiar with the UI code at all, but
>> maybe this is something that could get fixed before we build a RC?
>>
>>
>>
>>
>> On Fri, Jan 25, 2019 at 6:06 PM Gus Heck  wrote:
>> >
>> > I'd like to suggest that
>> https://issues.apache.org/jira/browse/SOLR-10211 be promoted to block
>> 8.0. I just got burned by it a second time.
>> >
>> > On Thu, Jan 24, 2019 at 1:05 PM Uwe Schindler  wrote:
>> >>
>> >> Cool,
>> >>
>> >> I am working on giving my best possible release time guess at the
>> FOSDEM conference!
>> >>
>> >> Uwe
>> >>
>> >> -
>> >> Uwe Schindler
>> >> Achterdiek 19, D-28357 Bremen
>> >> http://www.thetaphi.de
>> >> eMail: u...@thetaphi.de
>> >>
>> >> > -Original Message-
>> >> > From: Adrien Grand 
>> >> > Sent: Thursday, January 24, 2019 5:33 PM
>> >> > To: Lucene Dev 
>> >> > Subject: Re: Lucene/Solr 8.0
>> >> >
>> >> > +1 to release 7.7 and 8.0 in a row starting on the week of February
>> 4th.
>> >> >
>> >> > On Wed, Jan 23, 2019 at 4:23 PM jim ferenczi > >
>> >> > wrote:
>> >> > >
>> >> > > Hi,
>> >> > > As we agreed some time ago I'd like to start on releasing 8.0. The
>> branch is
>> >> > already created so we can start the process anytime now. Unless
>> there are
>> >> > objections I'd like to start the feature freeze next week in order
>> to build the
>> >> > first candidate the week after.
>> >> > > We'll also need a 7.7 release but I think we can handle both with
>> Alan so
>> >> > the question now is whether we are ok to start the release process
>> or if there
>> >> > are any blockers left ;).
>> >> > >
>> >> > >
>> >> > > Le mar. 15 janv. 2019 à 11:35, Alan Woodward > >
>> >> > a écrit :
>> >> > >>
>> >> > >> I’ve started to work through the various deprecations on the new
>> master
>> >> > branch.  There are a lot of them, and I’m going to need some
>> assistance for
>> >> > several of them, as it’s not entirely clear what to do.
>> >> > >>
>> >> > >> I’ll open two overarching issues in JIRA, one for lucene and one
>> for Solr,
>> >> > with lists of the deprecations that need to be removed in each one.
>> I’ll create
>> >> > a shared branch on gitbox to work against, and push the changes I’ve
>> already
>> >> > done there.  We can then create individual JIRA issues for any
>> changes that
>> >> > are more involved than just deleting code.
>> >> > >>
>> >> > >> All assistance gratefully received, particularly for the Solr
>> deprecations
>> >> > where there’s a lot of code I’m unfamiliar with.
>> >> > >>
>> >> > >> On 8 Jan 2019, at 09:21, Alan Woodward 
>> >> > wrote:
>> >> > >>
>> >> > >> I think the current plan is to do a 7.7 release at the same time
>> as 8.0, to
>> >> > handle any last-minute deprecations etc.  So let’s keep those jobs
>> enabled
>> >> > for now.
>> >> > >>
>> >> > >> On 8 Jan 2019, at 09:10, Uwe Schindler  wrote:
>> >> > >>
>> >> > >> Hi,
>> >> > >>
>> >> > >> I will start and add the branch_8x jobs to Jenkins once I have
>> some time
>> >> > later today.
>> >> > >>
>> >> > >> The question: How to proceed with branch_7x? Should we stop using
>> it
>> >> > and release 7.6.x only (so we would use branch_7_6 only for
>> bugfixes), or
>> >> > are we planning to do one more Lucene/Solr 7.7? In the latter case I
>> would keep
>> >> > the jenkins jobs enabled for a while.
>> >> > >>
>> >> 

[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-10.0.1) - Build # 26 - Unstable!

2019-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/26/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.core.ExitableDirectoryReaderTest.testCacheAssumptions

Error Message:
Should have fewer docs than 100

Stack Trace:
java.lang.AssertionError: Should have fewer docs than 100
at 
__randomizedtesting.SeedInfo.seed([9BA799225C1B79D6:ECDA260532EE34F4]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.core.ExitableDirectoryReaderTest.testCacheAssumptions(ExitableDirectoryReaderTest.java:103)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 2070 lines...]
   [junit4] JVM J0: stderr was not empty, see: 
C:\Users\jenkins\workspace\Lucene-Solr-8.x-Windows\lucene\build\core\test\temp\junit4-J0-20190125_174619_5686720422552860013227.syserr
   [junit4] >>> JVM J0 emitted unexpected output (verbatim) --

Re: Lucene/Solr 8.0

2019-01-25 Thread Gus Heck
It looks like someone tried to make it a blocker once before... And it's
actually a duplicate of an earlier issue (
https://issues.apache.org/jira/browse/SOLR-9818). I guess it's a question of
whether or not overall quality has a bearing on the decision to release. As
it turns out, the screenshot I posted to the issue shows less than half of the
shards that eventually got created, since there was an outstanding queue of
requests still processing at the time. I'm now having to delete 50 or so
cores, which luckily are small 100 MB initial testing cores, not the 20 GB
cores we'll be testing on in the near future. It more or less makes it
impossible to recommend the use of the admin UI for anything other than
read-only observation of the cluster. Now imagine someone leaves a browser
window open and forgets about it rather than browsing away or closing the
window, not knowing that it's silently pumping out requests after showing
an error... it would completely hose the node, and until they tracked down the
source of the requests (hope they didn't go home), it would be impossible to
resolve...

On Fri, Jan 25, 2019 at 1:25 PM Adrien Grand  wrote:

> Releasing a new major is very challenging on its own; I'd rather not
> call it a blocker and delay the release for it, since this isn't a new
> regression in 8.0: it looks like a problem that has affected Solr
> since at least 6.3? I'm not familiar with the UI code at all, but
> maybe this is something that could get fixed before we build a RC?
>
>
>
>
> On Fri, Jan 25, 2019 at 6:06 PM Gus Heck  wrote:
> >
> > I'd like to suggest that
> https://issues.apache.org/jira/browse/SOLR-10211 be promoted to block
> 8.0. I just got burned by it a second time.
> >
> > On Thu, Jan 24, 2019 at 1:05 PM Uwe Schindler  wrote:
> >>
> >> Cool,
> >>
> >> I am working on giving my best possible release time guess at the
> FOSDEM conference!
> >>
> >> Uwe
> >>
> >> -
> >> Uwe Schindler
> >> Achterdiek 19, D-28357 Bremen
> >> http://www.thetaphi.de
> >> eMail: u...@thetaphi.de
> >>
> >> > -Original Message-
> >> > From: Adrien Grand 
> >> > Sent: Thursday, January 24, 2019 5:33 PM
> >> > To: Lucene Dev 
> >> > Subject: Re: Lucene/Solr 8.0
> >> >
> >> > +1 to release 7.7 and 8.0 in a row starting on the week of February
> 4th.
> >> >
> >> > On Wed, Jan 23, 2019 at 4:23 PM jim ferenczi 
> >> > wrote:
> >> > >
> >> > > Hi,
> >> > > As we agreed some time ago I'd like to start on releasing 8.0. The
> branch is
> >> > already created so we can start the process anytime now. Unless there
> are
> >> > objections I'd like to start the feature freeze next week in order to
> build the
> >> > first candidate the week after.
> >> > > We'll also need a 7.7 release but I think we can handle both with
> Alan so
> >> > the question now is whether we are ok to start the release process or
> if there
> >> > are any blockers left ;).
> >> > >
> >> > >
> >> > > Le mar. 15 janv. 2019 à 11:35, Alan Woodward 
> >> > a écrit :
> >> > >>
> >> > >> I’ve started to work through the various deprecations on the new
> master
> >> > branch.  There are a lot of them, and I’m going to need some
> assistance for
> >> > several of them, as it’s not entirely clear what to do.
> >> > >>
> >> > >> I’ll open two overarching issues in JIRA, one for lucene and one
> for Solr,
> >> > with lists of the deprecations that need to be removed in each one.
> I’ll create
> >> > a shared branch on gitbox to work against, and push the changes I’ve
> already
> >> > done there.  We can then create individual JIRA issues for any
> changes that
> >> > are more involved than just deleting code.
> >> > >>
> >> > >> All assistance gratefully received, particularly for the Solr
> deprecations
> >> > where there’s a lot of code I’m unfamiliar with.
> >> > >>
> >> > >> On 8 Jan 2019, at 09:21, Alan Woodward 
> >> > wrote:
> >> > >>
> >> > >> I think the current plan is to do a 7.7 release at the same time
> as 8.0, to
> >> > handle any last-minute deprecations etc.  So let’s keep those jobs
> enabled
> >> > for now.
> >> > >>
> >> > >> On 8 Jan 2019, at 09:10, Uwe Schindler  wrote:
> >> > >>
> >> > >> Hi,
> >> > >>
> >> > >> I will start and add the branch_8x jobs to Jenkins once I have
> some time
> >> > later today.
> >> > >>
> >> > >> The question: How to proceed with branch_7x? Should we stop using
> it
> >> > and release 7.6.x only (so we would use branch_7_6 only for
> bugfixes), or
> >> > are we planning to do one more Lucene/Solr 7.7? In the latter case I
> would keep
> >> > the jenkins jobs enabled for a while.
> >> > >>
> >> > >> Uwe
> >> > >>
> >> > >> -
> >> > >> Uwe Schindler
> >> > >> Achterdiek 19, D-28357 Bremen
> >> > >> http://www.thetaphi.de
> >> > >> eMail: u...@thetaphi.de
> >> > >>
> >> > >> From: Alan Woodward 
> >> > >> Sent: Monday, January 7, 2019 11:30 AM
> >> > >> To: dev@lucene.apache.org
> >> > >> Subject: Re: Lucene/Solr 8.0
> >> > >>
> >> > >> OK, Christmas caught up with me a bit… I’ve just created a branch
> for

Re: Lucene/Solr 8.0

2019-01-25 Thread Adrien Grand
Releasing a new major is very challenging on its own; I'd rather not
call it a blocker and delay the release for it, since this isn't a new
regression in 8.0: it looks like a problem that has affected Solr
since at least 6.3? I'm not familiar with the UI code at all, but
maybe this is something that could get fixed before we build a RC?




On Fri, Jan 25, 2019 at 6:06 PM Gus Heck  wrote:
>
> I'd like to suggest that https://issues.apache.org/jira/browse/SOLR-10211 be 
> promoted to block 8.0. I just got burned by it a second time.
>
> On Thu, Jan 24, 2019 at 1:05 PM Uwe Schindler  wrote:
>>
>> Cool,
>>
>> I am working on giving my best possible release time guess at the FOSDEM 
>> conference!
>>
>> Uwe
>>
>> -
>> Uwe Schindler
>> Achterdiek 19, D-28357 Bremen
>> http://www.thetaphi.de
>> eMail: u...@thetaphi.de
>>
>> > -Original Message-
>> > From: Adrien Grand 
>> > Sent: Thursday, January 24, 2019 5:33 PM
>> > To: Lucene Dev 
>> > Subject: Re: Lucene/Solr 8.0
>> >
>> > +1 to release 7.7 and 8.0 in a row starting on the week of February 4th.
>> >
>> > On Wed, Jan 23, 2019 at 4:23 PM jim ferenczi 
>> > wrote:
>> > >
>> > > Hi,
>> > > As we agreed some time ago I'd like to start on releasing 8.0. The 
>> > > branch is
>> > already created so we can start the process anytime now. Unless there are
>> > objections I'd like to start the feature freeze next week in order to 
>> > build the
>> > first candidate the week after.
>> > > We'll also need a 7.7 release but I think we can handle both with Alan so
>> > the question now is whether we are ok to start the release process or if 
>> > there
>> > are any blockers left ;).
>> > >
>> > >
>> > > Le mar. 15 janv. 2019 à 11:35, Alan Woodward 
>> > a écrit :
>> > >>
>> > >> I’ve started to work through the various deprecations on the new master
>> > branch.  There are a lot of them, and I’m going to need some assistance for
>> > several of them, as it’s not entirely clear what to do.
>> > >>
>> > >> I’ll open two overarching issues in JIRA, one for lucene and one for 
>> > >> Solr,
>> > with lists of the deprecations that need to be removed in each one.  I’ll 
>> > create
>> > a shared branch on gitbox to work against, and push the changes I’ve 
>> > already
>> > done there.  We can then create individual JIRA issues for any changes that
>> > are more involved than just deleting code.
>> > >>
>> > >> All assistance gratefully received, particularly for the Solr 
>> > >> deprecations
>> > where there’s a lot of code I’m unfamiliar with.
>> > >>
>> > >> On 8 Jan 2019, at 09:21, Alan Woodward 
>> > wrote:
>> > >>
>> > >> I think the current plan is to do a 7.7 release at the same time as 
>> > >> 8.0, to
>> > handle any last-minute deprecations etc.  So let’s keep those jobs enabled
>> > for now.
>> > >>
>> > >> On 8 Jan 2019, at 09:10, Uwe Schindler  wrote:
>> > >>
>> > >> Hi,
>> > >>
>> > >> I will start and add the branch_8x jobs to Jenkins once I have some time
>> > later today.
>> > >>
>> > >> The question: How to proceed with branch_7x? Should we stop using it
>> > and release 7.6.x only (so we would use branch_7_6 only for bugfixes), or
>> > are we planning to do one more Lucene/Solr 7.7? In the latter case I would 
>> > keep
>> > the jenkins jobs enabled for a while.
>> > >>
>> > >> Uwe
>> > >>
>> > >> -
>> > >> Uwe Schindler
>> > >> Achterdiek 19, D-28357 Bremen
>> > >> http://www.thetaphi.de
>> > >> eMail: u...@thetaphi.de
>> > >>
>> > >> From: Alan Woodward 
>> > >> Sent: Monday, January 7, 2019 11:30 AM
>> > >> To: dev@lucene.apache.org
>> > >> Subject: Re: Lucene/Solr 8.0
>> > >>
>> > >> OK, Christmas caught up with me a bit… I’ve just created a branch for 8x
>> > from master, and am in the process of updating the master branch to version
>> > 9.  New commits that should be included in the 8.0 release should also be
>> > back-ported to branch_8x from master.
>> > >>
>> > >> This is not intended as a feature freeze, as I know there are still some
>> > things being worked on for 8.0; however, it should let us clean up master 
>> > by
>> > removing as much deprecated code as possible, and give us an idea of any
>> > replacement work that needs to be done.
>> > >>
>> > >>
>> > >> On 19 Dec 2018, at 15:13, David Smiley 
>> > wrote:
>> > >>
>> > >> January.
>> > >>
>> > >> On Wed, Dec 19, 2018 at 2:04 AM S G 
>> > wrote:
>> > >>
>> > >> It would be nice to see Solr 8 soon in January, as there is an 
>> > >> enhancement
>> > on nested-documents we are waiting to get our hands on.
>> > >> Any idea when Solr 8 would be out ?
>> > >>
>> > >> Thx
>> > >> SG
>> > >>
>> > >> On Mon, Dec 17, 2018 at 1:34 PM David Smiley
>> >  wrote:
>> > >>
>> > >> I see 10 JIRA issues matching this filter:   project in (SOLR, LUCENE) 
>> > >> AND
>> > priority = Blocker and status = open and fixVersion = "master (8.0)"
>> > >>click here:
>> > >>
>> > https://issues.apache.org/jira/issues/?jql=project%20in%20(SOLR%2C%20LU
>> > CENE)%20AND%20priority%20%3D%20Blocker%20

[jira] [Created] (SOLR-13168) tlog replicas wait for sync on every commit when solr is run with java ssertions

2019-01-25 Thread Hoss Man (JIRA)
Hoss Man created SOLR-13168:
---

 Summary: tlog replicas wait for sync on every commit when solr is 
run with java ssertions
 Key: SOLR-13168
 URL: https://issues.apache.org/jira/browse/SOLR-13168
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man
Assignee: Hoss Man


Due to a bug in how {{TestInjection.waitForInSyncWithLeader}} was implemented, 
the test injection code can "leak" into non-test instances of Solr in 
situations where Java assertions are enabled at run time.

This results in tlog replicas stalling on commit commands, and waiting for the 
regular scheduled/timed replication to take place before allowing the commit to 
succeed -- meaning that the commit commands can time out.
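
For readers unfamiliar with the pattern: Solr invokes its test-injection hooks
inside assert statements so that they compile to no-ops when assertions are
disabled. A minimal sketch of the failure mode, with hypothetical names and
timings (the real method takes replica/leader arguments and polls replication
state):

    public final class TestInjectionSketch {
      // Hypothetical guard; the real class detects the test framework differently.
      private static final boolean INSIDE_TEST = Boolean.getBoolean("tests.enabled");

      // Call sites look like: assert TestInjectionSketch.waitForInSyncWithLeader();
      // With "java -da" the call is elided entirely; with "java -ea" it runs,
      // so without the guard below a production node started with assertions
      // enabled blocks on every commit until scheduled replication catches up.
      public static boolean waitForInSyncWithLeader() {
        if (!INSIDE_TEST) {
          return true; // no-op outside of tests -- the check the buggy version lacked
        }
        try {
          Thread.sleep(5000); // stand-in for polling the replica's state
        } catch (InterruptedException e) {
          Thread.currentThread().interrupt();
        }
        return true; // must be true so the enclosing assert never fires
      }
    }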



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13168) tlog replicas wait for sync on every commit when solr is run with java assertions

2019-01-25 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-13168:

Summary: tlog replicas wait for sync on every commit when solr is run with 
java assertions  (was: tlog replicas wait for sync on every commit when solr is 
run with java ssertions)

> tlog replicas wait for sync on every commit when solr is run with java 
> assertions
> -
>
> Key: SOLR-13168
> URL: https://issues.apache.org/jira/browse/SOLR-13168
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> Due to a bug in how {{TestInjection.waitForInSyncWithLeader}} was 
> implemented, the test injection code can "leak" into non-test instances of 
> Solr in situations where Java assertions are enabled at run time.
> This results in tlog replicas stalling on commit commands, and waiting for 
> the regular scheduled/timed replication to take place before allowing the 
> commit to succeed -- meaning that the commit commands can time out.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 440 - Still unstable

2019-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/440/

3 tests failed.
FAILED:  org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:41236/_iuk/bp/forceleader_test_collection

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:41236/_iuk/bp/forceleader_test_collection
at 
__randomizedtesting.SeedInfo.seed([5E2580F0459254C7:B8B2B4307C10ADA6]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:504)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:479)
at 
org.apache.solr.cloud.ForceLeaderTest.testReplicasInLIRNoLeader(ForceLeaderTest.java:294)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1075)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1047)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsR

[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752502#comment-16752502
 ] 

ASF subversion and git services commented on SOLR-12801:


Commit 0ba2233eea05a35ea9b25187c6a96fbdb865eaf3 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0ba2233 ]

SOLR-12801: completely prevent tlog replicas from being used.

This follows the spirit of the change Mark intended in his previous commit to 
this test, but his solution wasn't covering all cases on backcompat to branch_7x

(see also: SOLR-12313)

(cherry picked from commit e2b8b0e5b1f36e6ecedbeca50263cc6c263d7909)


> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flaky tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_172) - Build # 974 - Unstable!

2019-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/974/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseG1GC

5 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test

Error Message:
Timed out waiting for replica core_node15 (1548437944358) to replicate from 
leader core_node4 (0)

Stack Trace:
java.lang.AssertionError: Timed out waiting for replica core_node15 
(1548437944358) to replicate from leader core_node4 (0)
at 
__randomizedtesting.SeedInfo.seed([D55201433738FF00:5D063EC492F8]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForReplicationFromReplicas(AbstractFullDistribZkTestBase.java:2247)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeWithPullReplicasTest.test(ChaosMonkeyNothingIsSafeWithPullReplicasTest.java:278)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1075)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1047)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
a

[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752500#comment-16752500
 ] 

ASF subversion and git services commented on SOLR-12801:


Commit afcc4fd5d3b80dfcb86ce64fc45315013ddb1d3e in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=afcc4fd ]

SOLR-12801: completely prevent tlog replicas from being used.

This follows the spirit of the change Mark intended in his previous commit to 
this test, but his solution wasn't covering all cases on backcompat to branch_7x

(see also: SOLR-12313)

(cherry picked from commit e2b8b0e5b1f36e6ecedbeca50263cc6c263d7909)


> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flaky tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12801) Fix the tests, remove BadApples and AwaitsFix annotations, improve env for test development.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752504#comment-16752504
 ] 

ASF subversion and git services commented on SOLR-12801:


Commit e2b8b0e5b1f36e6ecedbeca50263cc6c263d7909 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e2b8b0e ]

SOLR-12801: completely prevent tlog replicas from being used.

This follows the spirit of the change Mark intended in his previous commit to 
this test, but his solution wasn't covering all cases on backcompat to branch_7x

(see also: SOLR-12313)


> Fix the tests, remove BadApples and AwaitsFix annotations, improve env for 
> test development.
> 
>
> Key: SOLR-12801
> URL: https://issues.apache.org/jira/browse/SOLR-12801
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Critical
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> A single issue to counteract the single issue adding tons of annotations, the 
> continued addition of new flaky tests, and the continued addition of 
> flakiness to existing tests.
> Lots more to come.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752503#comment-16752503
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit 0ba2233eea05a35ea9b25187c6a96fbdb865eaf3 in lucene-solr's branch 
refs/heads/branch_8x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=0ba2233 ]

SOLR-12801: completely prevent tlog replicas from being used.

This follows the spirit of the change Mark intended in his previous commit to 
this test, but his solution wasn't covering all cases on backcompat to branch_7x

(see also: SOLR-12313)

(cherry picked from commit e2b8b0e5b1f36e6ecedbeca50263cc6c263d7909)


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it causes BasicDistributedZkTest to take forever sometimes, 
> and perhaps other tests too?
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does - if 
> the slave is higher than the leader, it's in sync, being equal is not 
> required. If it's expected for a test it should be a specific test that 
> fails. This just introduces massive delays.
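
The version rule above is easy to state in code. A minimal sketch (illustrative
names, not Solr's actual API):

    class SyncCheckSketch {
      // A follower whose index version is at or beyond the leader's counts as
      // in sync, mirroring what replication itself does; demanding exact
      // equality only adds needless waiting.
      static boolean isInSyncWithLeader(long followerVersion, long leaderVersion) {
        return followerVersion >= leaderVersion;
      }
    }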



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752501#comment-16752501
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit afcc4fd5d3b80dfcb86ce64fc45315013ddb1d3e in lucene-solr's branch 
refs/heads/branch_7x from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=afcc4fd ]

SOLR-12801: completely prevent tlog replicas from being used.

This follows the spirit of the change Mark intended in his previous commit to 
this test, but his solution wasn't covering all cases on backcompat to branch_7x

(see also: SOLR-12313)

(cherry picked from commit e2b8b0e5b1f36e6ecedbeca50263cc6c263d7909)


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it causes BasicDistributedZkTest to take forever sometimes, 
> and perhaps other tests too?
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does - if 
> the slave is higher than the leader, it's in sync, being equal is not 
> required. If it's expected for a test it should be a specific test that 
> fails. This just introduces massive delays.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752505#comment-16752505
 ] 

ASF subversion and git services commented on SOLR-12313:


Commit e2b8b0e5b1f36e6ecedbeca50263cc6c263d7909 in lucene-solr's branch 
refs/heads/master from Chris M. Hostetter
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e2b8b0e ]

SOLR-12801: completely prevent tlog replicas from being used.

This follows the spirit of the change Mark intended in his previous commit to 
this test, but his solution wasn't covering all cases on backcompat to branch_7x

(see also: SOLR-12313)


> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it causes BasicDistributedZkTest to take forever sometimes, 
> and perhaps other tests too?
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does - if 
> the slave is higher than the leader, it's in sync, being equal is not 
> required. If it's expected for a test it should be a specific test that 
> fails. This just introduces massive delays.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8634) LatLonShape: Query with the same polygon that is indexed might not match

2019-01-25 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752482#comment-16752482
 ] 

Nicholas Knize commented on LUCENE-8634:


For clarity I'll separate the issues being discussed here:

1. Sub centimeter polygons:  I think we document the spatial resolution of the 
encoding fairly well. 1e-7 dec deg ~= 1.11cm. Any polygon defined with vertex 
distances <= 1e-7 dec deg (like the one in the example here) should not be 
expected to index with the same accuracy as provided. So subcentimeter polygons 
in the WGS84 lat/lon projection are not supported and may result in an 
unexpected invalid shape. This is why I opened LUCENE-8632 to lay groundwork 
for alternative projections. If a user wants to index subcentimeter shapes they 
should do so using the right spatial reference system for the job. 

2. "Should we keep lines and polygons in the encoded space like boxes?"

So I made a simple decision (Occam's razor) when handling this in the first 
iteration of development. For point, line, and polygon queries, rather than 
quantizing the search shape in the query constructor (like BoundingBox does), I 
quantized the query shape in the test before invoking the query. Right or wrong, 
I chose this route for two reasons: a. consistency with 
{{LatLonPointInPolygonQuery}} which we discussed this topic at length across 
several issues, and b. we have no formal support for the EqualTo relation 
operation, only INTERSECT, DISJOINT, WITHIN. In hindsight INTERSECT does fill 
this void so a false negative using INTERSECT query on an indexed shape that is 
equalTo the query shape could/should probably be considered a bug. Furthermore, 
{{LatLonPointInPolygonQuery}} doesn't have the complexities to contend with 
like relating lines and polygons. I think this change probably deserves a bit 
more thought / consideration. 
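
To make the quantization concrete, here is a sketch that round-trips a vertex
through Lucene's integer encoding via org.apache.lucene.geo.GeoEncodingUtils --
the same snap-to-grid (one step ~= 1e-7 deg ~= 1.11 cm) that collapses
sub-centimeter polygons. The coordinate values are taken from the report below;
the class itself is illustrative, not a project test utility:

    import org.apache.lucene.geo.GeoEncodingUtils;

    public class QuantizeVertexSketch {
      public static void main(String[] args) {
        double lat = 22.0;
        double lon = 1.401298464324817E-45; // degenerate longitude from the report
        // Encode to the 32-bit grid LatLonShape indexes on, then decode back:
        // any detail finer than one grid step is snapped away.
        double qLat = GeoEncodingUtils.decodeLatitude(GeoEncodingUtils.encodeLatitude(lat));
        double qLon = GeoEncodingUtils.decodeLongitude(GeoEncodingUtils.encodeLongitude(lon));
        System.out.println(qLat + "," + qLon); // tiny longitudes snap to ~0.0
      }
    }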


> LatLonShape: Query with the same polygon that is indexed might not match
> 
>
> Key: LUCENE-8634
> URL: https://issues.apache.org/jira/browse/LUCENE-8634
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/sandbox
>Affects Versions: 8.0, 7.7, master (9.0)
>Reporter: Ignacio Vera
>Priority: Major
> Attachments: LUCENE-8634.patch, LUCENE-8634.patch
>
>
> If a polygon with a degenerated dimension is indexed and then an intersect 
> query is performed with the same polygon, it might result in an empty result. 
> For example this polygon with degenerated longitude:
> POLYGON((1.401298464324817E-45 22.0, 1.401298464324817E-45 69.0, 
> 4.8202184588118395E-40 69.0, 4.8202184588118395E-40 22.0, 
> 1.401298464324817E-45 22.0))
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10211) Solr UI Repeats long requests

2019-01-25 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752480#comment-16752480
 ] 

Gus Heck commented on SOLR-10211:
-

I think this and SOLR-9818 need to be collapsed as duplicates. Maybe SOLR-9818 
should take precedence since it has more discussion?

> Solr UI Repeats long requests
> -
>
> Key: SOLR-10211
> URL: https://issues.apache.org/jira/browse/SOLR-10211
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.3
> Environment: Linux Debian7, Firefox 51.0.1
>Reporter: Tomasz Czarnecki
>Priority: Critical
>  Labels: blocker
> Fix For: 7.0
>
> Attachments: Connection to Solr lost message.png, Screen Shot 
> 2019-01-25 at 12.16.04 PM.png, repeated optimize requests.png
>
>
> I can observe the following behavior in the new UI:
> - If a request takes too long,
> - pop up a red "Connection to Solr lost" message, and
> - try to repeat the last request every few seconds to recover.
> While this tactic may seem OK for real connection problems, it can do a lot 
> of harm if a request takes long because it's heavy.
> I have had two such scenarios.
> 1. Loading the schema (e.g. /#/collection1/schema). For a big index this can 
> require a lot of memory initially, and it can take 20-30 seconds. But if 
> such an operation is repeated several times in a short time frame, the 
> resource requirements add up, and this results in an OOM exception on the 
> server side.
> The workaround is to:
> - Try to load the schema.
> - If the red "Connection to Solr lost" message pops up, close the Solr UI 
> browser tab.
> - Wait about a minute for the server to warm up.
> - Open a new Solr UI tab and load the schema again; this time it works fast 
> enough, probably until the next index update.
> 2. The "optimize now" functionality we occasionally use. It can take a while 
> for some collections (~100 GB, 10M docs). If such a request is repeated over 
> a longer period of time, the whole Jetty thread pool can be exhausted, 
> leaving Solr unresponsive to any requests.
> It's as easy as starting an optimize and leaving your screen for 15 minutes 
> with the "Connection to Solr lost" message present.
> Observed on a few of our Solr instances after migration from 5.3 to 6.3.
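
The harmful part of the behavior described above is that the recovery loop
replays the original, possibly heavy request with no cap and no backoff. A
minimal sketch of the two shapes (hypothetical helper names; the actual admin
UI is AngularJS, this only illustrates the control flow):

    import java.io.IOException;

    public class RetrySketch {
      static void send(String request) throws IOException { /* issue the HTTP request */ }

      // Reported behavior: on failure, re-issue the same heavy request
      // (optimize, schema load, ...) every few seconds, forever.
      static void buggyRecover(String lastRequest) throws InterruptedException {
        while (true) {
          try { send(lastRequest); return; }
          catch (IOException e) { Thread.sleep(5000); }
        }
      }

      // Safer shape: probe liveness with a cheap request, back off, cap the
      // attempts, and never replay the expensive operation automatically.
      static boolean saferRecover() throws InterruptedException {
        for (int attempt = 1; attempt <= 10; attempt++) {
          try { send("/admin/ping"); return true; }
          catch (IOException e) { Thread.sleep(5000L * attempt); } // backoff
        }
        return false; // give up and surface the error to the user
      }
    }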



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10211) Solr UI Repeats long requests

2019-01-25 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752474#comment-16752474
 ] 

Gus Heck commented on SOLR-10211:
-

Attached image of result from this bug

> Solr UI Repeats long requests
> -
>
> Key: SOLR-10211
> URL: https://issues.apache.org/jira/browse/SOLR-10211
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.3
> Environment: Linux Debian7, Firefox 51.0.1
>Reporter: Tomasz Czarnecki
>Priority: Critical
>  Labels: blocker
> Fix For: 7.0
>
> Attachments: Connection to Solr lost message.png, Screen Shot 
> 2019-01-25 at 12.16.04 PM.png, repeated optimize requests.png
>
>
> I can observe the following behavior in the new UI:
> - If a request takes too long,
> - pop up a red "Connection to Solr lost" message, and
> - try to repeat the last request every few seconds to recover.
> While this tactic may seem OK for real connection problems, it can do a lot 
> of harm if a request takes long because it's heavy.
> I have had two such scenarios.
> 1. Loading the schema (e.g. /#/collection1/schema). For a big index this can 
> require a lot of memory initially, and it can take 20-30 seconds. But if 
> such an operation is repeated several times in a short time frame, the 
> resource requirements add up, and this results in an OOM exception on the 
> server side.
> The workaround is to:
> - Try to load the schema.
> - If the red "Connection to Solr lost" message pops up, close the Solr UI 
> browser tab.
> - Wait about a minute for the server to warm up.
> - Open a new Solr UI tab and load the schema again; this time it works fast 
> enough, probably until the next index update.
> 2. The "optimize now" functionality we occasionally use. It can take a while 
> for some collections (~100 GB, 10M docs). If such a request is repeated over 
> a longer period of time, the whole Jetty thread pool can be exhausted, 
> leaving Solr unresponsive to any requests.
> It's as easy as starting an optimize and leaving your screen for 15 minutes 
> with the "Connection to Solr lost" message present.
> Observed on a few of our Solr instances after migration from 5.3 to 6.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10211) Solr UI Repeats long requests

2019-01-25 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-10211:

Attachment: Screen Shot 2019-01-25 at 12.16.04 PM.png

> Solr UI Repeats long requests
> -
>
> Key: SOLR-10211
> URL: https://issues.apache.org/jira/browse/SOLR-10211
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.3
> Environment: Linux Debian7, Firefox 51.0.1
>Reporter: Tomasz Czarnecki
>Priority: Critical
>  Labels: blocker
> Fix For: 7.0
>
> Attachments: Connection to Solr lost message.png, Screen Shot 
> 2019-01-25 at 12.16.04 PM.png, repeated optimize requests.png
>
>
> I can observe the following behavior in the new UI:
> - If a request takes too long:
> - Pop up a red "Connection to Solr lost" message.
> - Try to repeat the last request every few seconds to recover.
> While this tactic may seem OK for real connection problems, it can do a lot 
> of harm if the request takes long because it is heavy.
> I have had two such scenarios.
> 1. Loading the schema (e.g. /#/collection1/schema). For a big index this can 
> initially require a lot of memory, and it can take 20-30 seconds. But if 
> such an operation is repeated several times in a short time frame, the 
> resource requirements add up and this results in an OOM exception on the 
> server side.
> The workaround is to:
> - Try to load the schema.
> - If the red "Connection to Solr lost" message pops up, close the Solr UI 
> browser tab.
> - Wait about a minute for the server to warm up.
> - Open a new Solr UI tab and load the schema again; this time it works fast 
> enough, probably until the next index update.
> 2. The "optimize now" functionality we occasionally use. It can take a while 
> for some collections (~100GB, 10M docs). If such a request is repeated over 
> a longer period of time, the whole Jetty thread pool can be exhausted, 
> leaving Solr unresponsive to any requests.
> It's as easy as starting an optimize and leaving your screen for 15 minutes 
> with the "Connection to Solr lost" message present.
> Observed on a few of our Solr instances after the migration from 5.3 to 6.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10211) Solr UI Repeats long requests

2019-01-25 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-10211:

Summary: Solr UI Repeats long requests  (was: Solr UI crashes Solr server 
sometimes by repeating long requests)

> Solr UI Repeats long requests
> -
>
> Key: SOLR-10211
> URL: https://issues.apache.org/jira/browse/SOLR-10211
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.3
> Environment: Linux Debian7, Firefox 51.0.1
>Reporter: Tomasz Czarnecki
>Priority: Critical
>  Labels: blocker
> Fix For: 7.0
>
> Attachments: Connection to Solr lost message.png, repeated optimize 
> requests.png
>
>
> I can observe the following behavior in the new UI:
> - If a request takes too long:
> - Pop up a red "Connection to Solr lost" message.
> - Try to repeat the last request every few seconds to recover.
> While this tactic may seem OK for real connection problems, it can do a lot 
> of harm if the request takes long because it is heavy.
> I have had two such scenarios.
> 1. Loading the schema (e.g. /#/collection1/schema). For a big index this can 
> initially require a lot of memory, and it can take 20-30 seconds. But if 
> such an operation is repeated several times in a short time frame, the 
> resource requirements add up and this results in an OOM exception on the 
> server side.
> The workaround is to:
> - Try to load the schema.
> - If the red "Connection to Solr lost" message pops up, close the Solr UI 
> browser tab.
> - Wait about a minute for the server to warm up.
> - Open a new Solr UI tab and load the schema again; this time it works fast 
> enough, probably until the next index update.
> 2. The "optimize now" functionality we occasionally use. It can take a while 
> for some collections (~100GB, 10M docs). If such a request is repeated over 
> a longer period of time, the whole Jetty thread pool can be exhausted, 
> leaving Solr unresponsive to any requests.
> It's as easy as starting an optimize and leaving your screen for 15 minutes 
> with the "Connection to Solr lost" message present.
> Observed on a few of our Solr instances after the migration from 5.3 to 6.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.0

2019-01-25 Thread Gus Heck
I'd like to suggest that https://issues.apache.org/jira/browse/SOLR-10211
be promoted to block 8.0. I just got burned by it a second time.

On Thu, Jan 24, 2019 at 1:05 PM Uwe Schindler  wrote:

> Cool,
>
> I am working on giving my best possible release-time guess at the FOSDEM
> conference!
>
> Uwe
>
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> http://www.thetaphi.de
> eMail: u...@thetaphi.de
>
> > -Original Message-
> > From: Adrien Grand 
> > Sent: Thursday, January 24, 2019 5:33 PM
> > To: Lucene Dev 
> > Subject: Re: Lucene/Solr 8.0
> >
> > +1 to release 7.7 and 8.0 in a row starting on the week of February 4th.
> >
> > On Wed, Jan 23, 2019 at 4:23 PM jim ferenczi 
> > wrote:
> > >
> > > Hi,
> > > As we agreed some time ago I'd like to start on releasing 8.0. The
> branch is
> > already created so we can start the process anytime now. Unless there are
> > objections I'd like to start the feature freeze next week in order to
> build the
> > first candidate the week after.
> > > We'll also need a 7.7 release, but I think we can handle both with
> > > Alan, so the question now is whether we are OK to start the release
> > > process or if there are any blockers left ;).
> > >
> > >
> > > On Tue, Jan 15, 2019 at 11:35 AM Alan Woodward 
> > wrote:
> > >>
> > >> I’ve started to work through the various deprecations on the new
> master
> > branch.  There are a lot of them, and I’m going to need some assistance
> for
> > several of them, as it’s not entirely clear what to do.
> > >>
> > >> I’ll open two overarching issues in JIRA, one for lucene and one for
> Solr,
> > with lists of the deprecations that need to be removed in each one.
> I’ll create
> > a shared branch on gitbox to work against, and push the changes I’ve
> already
> > done there.  We can then create individual JIRA issues for any changes
> that
> > are more involved than just deleting code.
> > >>
> > >> All assistance gratefully received, particularly for the Solr
> deprecations
> > where there’s a lot of code I’m unfamiliar with.
> > >>
> > >> On 8 Jan 2019, at 09:21, Alan Woodward 
> > wrote:
> > >>
> > >> I think the current plan is to do a 7.7 release at the same time as
> 8.0, to
> > handle any last-minute deprecations etc.  So let’s keep those jobs
> enabled
> > for now.
> > >>
> > >> On 8 Jan 2019, at 09:10, Uwe Schindler  wrote:
> > >>
> > >> Hi,
> > >>
> > >> I will start and add the branch_8x jobs to Jenkins once I have some
> time
> > later today.
> > >>
> > >> The question: How to proceed with branch_7x? Should we stop using it
> > and release 7.6.x only (so we would use branch_7_6 only for bugfixes), or
> > are we planning one more Lucene/Solr 7.7 release? In the latter case I
> > would keep the Jenkins jobs enabled for a while.
> > >>
> > >> Uwe
> > >>
> > >> -
> > >> Uwe Schindler
> > >> Achterdiek 19, D-28357 Bremen
> > >> http://www.thetaphi.de
> > >> eMail: u...@thetaphi.de
> > >>
> > >> From: Alan Woodward 
> > >> Sent: Monday, January 7, 2019 11:30 AM
> > >> To: dev@lucene.apache.org
> > >> Subject: Re: Lucene/Solr 8.0
> > >>
> > >> OK, Christmas caught up with me a bit… I’ve just created a branch for
> 8x
> > from master, and am in the process of updating the master branch to
> version
> > 9.  New commits that should be included in the 8.0 release should also be
> > back-ported to branch_8x from master.
> > >>
> > >> This is not intended as a feature freeze, as I know there are still
> some
> > things being worked on for 8.0; however, it should let us clean up
> master by
> > removing as much deprecated code as possible, and give us an idea of any
> > replacement work that needs to be done.
> > >>
> > >>
> > >> On 19 Dec 2018, at 15:13, David Smiley 
> > wrote:
> > >>
> > >> January.
> > >>
> > >> On Wed, Dec 19, 2018 at 2:04 AM S G 
> > wrote:
> > >>
> > >> It would be nice to see Solr 8 in January, as there is an enhancement
> > to nested documents that we are waiting to get our hands on.
> > >> Any idea when Solr 8 will be out?
> > >>
> > >> Thx
> > >> SG
> > >>
> > >> On Mon, Dec 17, 2018 at 1:34 PM David Smiley
> >  wrote:
> > >>
> > >> I see 10 JIRA issues matching this filter:   project in (SOLR,
> LUCENE) AND
> > priority = Blocker and status = open and fixVersion = "master (8.0)"
> > >>click here:
> > >>
> > https://issues.apache.org/jira/issues/?jql=project%20in%20(SOLR%2C%20LU
> > CENE)%20AND%20priority%20%3D%20Blocker%20and%20status%20%3D%2
> > 0open%20and%20fixVersion%20%3D%20%22master%20(8.0)%22%20
> > >>
> > >> Thru the end of the month, I intend to work on those issues not yet
> > assigned.
> > >>
> > >> On Mon, Dec 17, 2018 at 4:51 AM Adrien Grand 
> > wrote:
> > >>
> > >> +1
> > >>
> > >> On Mon, Dec 17, 2018 at 10:38 AM Alan Woodward
> >  wrote:
> > >> >
> > >> > Hi all,
> > >> >
> > >> > Now that 7.6 is out of the door (thanks Nick!) we should think about
> > cutting the 8.0 branch and moving master to 9.0.  I’ll volunteer to
> create the
> > branch this week - say Wedne

[jira] [Comment Edited] (SOLR-10211) Solr UI crashes Solr server sometimes by repeating long requests

2019-01-25 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752438#comment-16752438
 ] 

Gus Heck edited comment on SOLR-10211 at 1/25/19 5:00 PM:
--

I just hit this again, this time with the autoscaling suggestions UI (I wound up 
with multiple copies of a core from a MOVEREPLICA suggestion). It seems that all 
non-idempotent UI commands are very dangerous to use because of this tendency 
to repeat timed-out requests.


was (Author: gus_heck):
I just hit this again, this time with the autoscaling suggestions UI. It seems 
that all non-idempotent UI commands are very dangerous to use because of this 
tendency to repeat timed-out requests.
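A minimal SolrJ sketch of how to guard a long-running, non-idempotent 
Collections API call outside the UI (collection/shard names and the async id 
are hypothetical): submit it once with a fixed async request id and poll the 
status instead of re-sending the command. Solr rejects a second submission with 
the same id, so a retry cannot duplicate the operation.

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.response.RequestStatusState;

public class SafeAdminCall {
  static void splitOnce(SolrClient client) throws Exception {
    String asyncId = "split-collection1-shard1-20190125"; // fixed, caller-chosen id
    CollectionAdminRequest.splitShard("collection1")
        .setShardName("shard1")
        .processAsync(asyncId, client);                   // submitted exactly once
    RequestStatusState state;
    do {
      Thread.sleep(5_000);                                // poll, never re-submit
      state = CollectionAdminRequest.requestStatus(asyncId)
          .process(client).getRequestStatus();
    } while (state == RequestStatusState.SUBMITTED
          || state == RequestStatusState.RUNNING);
  }
}
{code}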

> Solr UI crashes Solr server sometimes by repeating long requests
> ---
>
> Key: SOLR-10211
> URL: https://issues.apache.org/jira/browse/SOLR-10211
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.3
> Environment: Linux Debian7, Firefox 51.0.1
>Reporter: Tomasz Czarnecki
>Priority: Critical
>  Labels: blocker
> Fix For: 7.0
>
> Attachments: Connection to Solr lost message.png, repeated optimize 
> requests.png
>
>
> I can observe the following behavior in the new UI:
> - If a request takes too long:
> - Pop up a red "Connection to Solr lost" message.
> - Try to repeat the last request every few seconds to recover.
> While this tactic may seem OK for real connection problems, it can do a lot 
> of harm if the request takes long because it is heavy.
> I have had two such scenarios.
> 1. Loading the schema (e.g. /#/collection1/schema). For a big index this can 
> initially require a lot of memory, and it can take 20-30 seconds. But if 
> such an operation is repeated several times in a short time frame, the 
> resource requirements add up and this results in an OOM exception on the 
> server side.
> The workaround is to:
> - Try to load the schema.
> - If the red "Connection to Solr lost" message pops up, close the Solr UI 
> browser tab.
> - Wait about a minute for the server to warm up.
> - Open a new Solr UI tab and load the schema again; this time it works fast 
> enough, probably until the next index update.
> 2. The "optimize now" functionality we occasionally use. It can take a while 
> for some collections (~100GB, 10M docs). If such a request is repeated over 
> a longer period of time, the whole Jetty thread pool can be exhausted, 
> leaving Solr unresponsive to any requests.
> It's as easy as starting an optimize and leaving your screen for 15 minutes 
> with the "Connection to Solr lost" message present.
> Observed on a few of our Solr instances after the migration from 5.3 to 6.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10211) Solr UI crashes Solr server sometimes by repeating long requests

2019-01-25 Thread Gus Heck (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck updated SOLR-10211:

Priority: Critical  (was: Major)

> Solr UI crashes Solr server sometimes by repeating long requests
> ---
>
> Key: SOLR-10211
> URL: https://issues.apache.org/jira/browse/SOLR-10211
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.3
> Environment: Linux Debian7, Firefox 51.0.1
>Reporter: Tomasz Czarnecki
>Priority: Critical
>  Labels: blocker
> Fix For: 7.0
>
> Attachments: Connection to Solr lost message.png, repeated optimize 
> requests.png
>
>
> I can observe the following behavior in the new UI:
> - If a request takes too long:
> - Pop up a red "Connection to Solr lost" message.
> - Try to repeat the last request every few seconds to recover.
> While this tactic may seem OK for real connection problems, it can do a lot 
> of harm if the request takes long because it is heavy.
> I have had two such scenarios.
> 1. Loading the schema (e.g. /#/collection1/schema). For a big index this can 
> initially require a lot of memory, and it can take 20-30 seconds. But if 
> such an operation is repeated several times in a short time frame, the 
> resource requirements add up and this results in an OOM exception on the 
> server side.
> The workaround is to:
> - Try to load the schema.
> - If the red "Connection to Solr lost" message pops up, close the Solr UI 
> browser tab.
> - Wait about a minute for the server to warm up.
> - Open a new Solr UI tab and load the schema again; this time it works fast 
> enough, probably until the next index update.
> 2. The "optimize now" functionality we occasionally use. It can take a while 
> for some collections (~100GB, 10M docs). If such a request is repeated over 
> a longer period of time, the whole Jetty thread pool can be exhausted, 
> leaving Solr unresponsive to any requests.
> It's as easy as starting an optimize and leaving your screen for 15 minutes 
> with the "Connection to Solr lost" message present.
> Observed on a few of our Solr instances after the migration from 5.3 to 6.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10211) Solr UI crashes Solr server sometimes by repeating long requests

2019-01-25 Thread Gus Heck (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752438#comment-16752438
 ] 

Gus Heck commented on SOLR-10211:
-

I just hit this again, this time with the autoscaling suggestions UI. It seems 
that all non-idempotent UI commands are very dangerous to use because of this 
tendency to repeat timed-out requests.

> Solr UI crashes Solr server sometimes by repeating long requests
> ---
>
> Key: SOLR-10211
> URL: https://issues.apache.org/jira/browse/SOLR-10211
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.3
> Environment: Linux Debian7, Firefox 51.0.1
>Reporter: Tomasz Czarnecki
>Priority: Major
>  Labels: blocker
> Fix For: 7.0
>
> Attachments: Connection to Solr lost message.png, repeated optimize 
> requests.png
>
>
> I can observe the following behavior in the new UI:
> - If a request takes too long:
> - Pop up a red "Connection to Solr lost" message.
> - Try to repeat the last request every few seconds to recover.
> While this tactic may seem OK for real connection problems, it can do a lot 
> of harm if the request takes long because it is heavy.
> I have had two such scenarios.
> 1. Loading the schema (e.g. /#/collection1/schema). For a big index this can 
> initially require a lot of memory, and it can take 20-30 seconds. But if 
> such an operation is repeated several times in a short time frame, the 
> resource requirements add up and this results in an OOM exception on the 
> server side.
> The workaround is to:
> - Try to load the schema.
> - If the red "Connection to Solr lost" message pops up, close the Solr UI 
> browser tab.
> - Wait about a minute for the server to warm up.
> - Open a new Solr UI tab and load the schema again; this time it works fast 
> enough, probably until the next index update.
> 2. The "optimize now" functionality we occasionally use. It can take a while 
> for some collections (~100GB, 10M docs). If such a request is repeated over 
> a longer period of time, the whole Jetty thread pool can be exhausted, 
> leaving Solr unresponsive to any requests.
> It's as easy as starting an optimize and leaving your screen for 15 minutes 
> with the "Connection to Solr lost" message present.
> Observed on a few of our Solr instances after the migration from 5.3 to 6.3.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13165) enabling docValues on a tdate field and searching on the field is very slow

2019-01-25 Thread Sheeba Dhanaraj (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752406#comment-16752406
 ] 

Sheeba Dhanaraj commented on SOLR-13165:


After adding multiValued=false to the above field definition, search queries 
are faster now. Based on the documentation, multiValued is false by default, so 
I am not sure why it needs to be added explicitly.
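One possible explanation (a guess, not confirmed for this schema): field 
properties that are not set explicitly are inherited from the fieldType, so if 
the fieldType or a dynamic-field rule declares multiValued="true", the field 
picks it up even though the schema-wide default is false. A sketch with a 
hypothetical field name, making both attributes explicit:

{code}
<!-- Hypothetical field; declaring docValues and multiValued on the field
     itself pins them down regardless of what the fieldType declares. -->
<field name="created_dt" type="tdate" indexed="true" stored="true"
       docValues="true" multiValued="false"/>
{code}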

> enabling docValues on a tdate field and searching on the field is very slow
> ---
>
> Key: SOLR-13165
> URL: https://issues.apache.org/jira/browse/SOLR-13165
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Sheeba Dhanaraj
>Priority: Major
>
> When we enable docValues on a tdate field and search on the field, response 
> time is very slow. When we remove docValues from the field, performance is 
> significantly improved. Is this by design? Should we not enable docValues for 
> tdate fields?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9515) Update to Hadoop 3

2019-01-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752374#comment-16752374
 ] 

Kevin Risden commented on SOLR-9515:


I'm hoping to take a crack at this and get it into Solr 8.x. I know I am a bit 
late to the 8.x party, but Hadoop 3.x will be needed for full JDK 11 support 
with Solr on HDFS.

> Update to Hadoop 3
> --
>
> Key: SOLR-9515
> URL: https://issues.apache.org/jira/browse/SOLR-9515
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: SOLR-9515.patch
>
>
> Hadoop 3 is not out yet, but I'd like to iron out the upgrade to be prepared. 
> I'll start up a dev branch.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-8640) validate delimiters when parsing date ranges

2019-01-25 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev reassigned LUCENE-8640:


Assignee: Mikhail Khludnev

> validate delimiters when parsing date ranges
> 
>
> Key: LUCENE-8640
> URL: https://issues.apache.org/jira/browse/LUCENE-8640
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: LUCENE-8640.patch, LUCENE-8640.patch, LUCENE-8640.patch, 
> mypatch.patch
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> {{DateRangePrefixTree.parseCalendar()}} should validate delimiters to reject 
> dates like {{2000-11T13}}.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 3157 - Unstable

2019-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3157/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp

Error Message:
{numFound=98865,start=0,docs=[]} expected:<10> but was:<98865>

Stack Trace:
java.lang.AssertionError: {numFound=98865,start=0,docs=[]} 
expected:<10> but was:<98865>
at 
__randomizedtesting.SeedInfo.seed([D1E8CB01230D9AC2:F0B68DA32F234463]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing.testScaleUp(TestSimExtremeIndexing.java:135)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14680 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimExtremeIndexing
   [junit4]   2> Creating dataDir: 
/home/jenki

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 3452 - Unstable!

2019-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/3452/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.AssignBackwardCompatibilityTest.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:46425/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:46425/solr
at 
__randomizedtesting.SeedInfo.seed([AF5CF57EE65435E7:2708CAA448A8581F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:484)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:414)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1110)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AssignBackwardCompatibilityTest.test(AssignBackwardCompatibilityTest.java:87)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucen

Re: [DISCUSS] Opening old indices for reading

2019-01-25 Thread Michael McCandless
Another example: long ago, Lucene allowed pos=-1 to be indexed, and it
caused all sorts of problems.  We also stopped allowing positions close to
Integer.MAX_VALUE (https://issues.apache.org/jira/browse/LUCENE-6382).  Yet
another is allowing negative vInts, which are possible but horribly
inefficient (https://issues.apache.org/jira/browse/LUCENE-3738).

We do need to be free to fix these problems and then know after N+2
releases that no index can have the issue.

I like the idea of providing an "expert" / best-effort / limited way of
carrying forward such ancient indices, but I think the huge challenge for
someone using that tool on an important index will be enumerating the list
of issues that might "matter" (the 3 Adrien listed + the 3 I listed above
are a start for this list) and taking appropriate steps to "correct" the
index if so.  E.g. on a norms encoding change, somehow these expert tools
must decode norms the old way, encode them the new way, and then rewrite
the norms files.  Or if the index has pos=-1, changing that to pos=0.  Or
if it has negative vInts, ... etc.

Or maybe the "special" DirectoryReader only reads stored fields?  And so
you would enumerate your _source and reindex into the latest format ...
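
A minimal sketch of that stored-fields-only carry-forward (not an existing 
Lucene tool; paths are hypothetical): read each live document's stored fields 
from the old index and re-add it to a fresh index. Real reindexing must rebuild 
analyzed/indexed fields from the stored values; blindly re-adding stored fields 
loses the original index options.

{code}
import java.nio.file.Paths;
import org.apache.lucene.index.*;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Bits;

public class StoredFieldsCarryForward {
  public static void main(String[] args) throws Exception {
    try (Directory oldDir = FSDirectory.open(Paths.get("old-index"));
         DirectoryReader reader = DirectoryReader.open(oldDir);
         Directory newDir = FSDirectory.open(Paths.get("new-index"));
         IndexWriter writer = new IndexWriter(newDir, new IndexWriterConfig())) {
      for (LeafReaderContext ctx : reader.leaves()) {
        LeafReader leaf = ctx.reader();
        Bits liveDocs = leaf.getLiveDocs(); // null when the segment has no deletions
        for (int docId = 0; docId < leaf.maxDoc(); docId++) {
          if (liveDocs != null && !liveDocs.get(docId)) continue; // skip deleted docs
          writer.addDocument(leaf.document(docId));
        }
      }
      writer.commit();
    }
  }
}
{code}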

> Something like https://issues.apache.org/jira/browse/LUCENE-8277 would
> help make it harder to introduce corrupt data in an index.

+1

Every time we catch something like "don't allow pos = -1 into the index", we
need to somehow remember to also add the check in addIndices.

Mike McCandless

http://blog.mikemccandless.com


On Fri, Jan 25, 2019 at 3:52 AM Adrien Grand  wrote:

> Agreed with Michael that setting expectations is going to be
> important. The thing that I would like to make sure of is that we would
> never refrain from moving Lucene forward because of this feature. In
> particular, lucene-core should be free to make assumptions that are
> valid for N and N-1 indices without worrying about the fact that we
> have this super-expert feature that allows opening older indices. Here
> are some assumptions that I have in mind which have not always been
> true:
>  - norms might be encoded in a different way (this changed in 7)
>  - all index files have a checksum (only true since Lucene 5)
>  - offsets are always going forward (only enforced since Lucene 7)
>
> This means that carrying indices over by just merging them with the
> new version to move them to a new codec won't work all the time. For
> instance if your index has backward offsets and new codecs assume that
> offsets are going forward, then merging might fail or corrupt offsets
> - I'd like to make sure that we would not consider this a bug.
>
> Erick, I don't think this feature would be suitable for "robust index
> upgrades". To me it is really a best effort and shouldn't be trusted
> too much.
>
> I think some users will be tempted to wrap old readers to make them
> look good and then add them back to an index using addIndexes?
> Something like https://issues.apache.org/jira/browse/LUCENE-8277 would
> help make it harder to introduce corrupt data in an index.
>
> On Wed, Jan 23, 2019 at 3:11 PM Simon Willnauer
>  wrote:
> >
> > Hey folks,
> >
> > tl;dr: I want to be able to open an IndexReader on an old index if the
> > SegmentInfo version is supported and all segment codecs are available.
> > Today that's not possible, even if I port old formats to current
> > versions.
> >
> > Our BWC policy for quite a while has been N-1 major versions. That's
> > good and I think we should keep it that way. Only recently, due to
> > changes in how we encode/decode norms, we also hard-enforce the
> > index-version-created in several places, as well as the version a segment
> > was written with. These are great enforcements and I understand why. My
> > request here is whether we can find consensus on somehow allowing (via a
> > special DirectoryReader, for instance) such an index to be opened for
> > reading only, without the guarantee that our high-level APIs decode, for
> > instance, norms correctly. This would be enough to consume stored fields
> > etc. for reindexing, or, if users are aware, to do the norms decoding in
> > the codec themselves. I am happy to work on a proposal for how this would
> > work. It would still enforce no writing or anything like that. I am also
> > all for putting such a reader into misc and marking it experimental.
> >
> > simon
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Updated] (SOLR-13167) Duplicate Child Documents and nondeterministic search

2019-01-25 Thread Kevin Bachmann (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Bachmann updated SOLR-13167:
--
Description: 
I have a product search hosted on a Solr cloud with 2 shards and two instances 
hosted on EC2, with the following setup:

A product has an unlimited number of children, which are small objects with shop 
information. These child documents of the products define the shops where the 
product is available. The requirement from my side is to update / sync the 
whole documents (parent and children) at least once a day. The availability 
information is included in the child documents with a quantity field.

Problem:
 # After every sync the number of child documents (shops) increases and nests 
deeper with every sync as the quantity changes, because the child documents are 
apparently not updated by id but newly created with the same id 
(document duplicates, comparable to SOLR-5211, SOLR-6096, SOLR-12638).
 # Whenever I sync the products with children one level deep (parent > child), 
I get parent > child > child > child > ... depending on how many children there 
are (see screenshot-4.png). These children also can't be displayed with 
nodeType:shop.
 # Whenever I try to request the products (parents) by a child attribute 
(shopId), the search is nondeterministic and does not return the correct 
products. A lot of products contain children that have never been assigned to 
them. Some products are flooded with a huge number of children (>1000) although 
they have about 10 assigned. As you can see in screenshots 1 to 3, there are 
three queries that are exactly the same and return different products. 
Screenshot-1 with 26241 results would be the correct amount and correct data, 
but the other two are completely wrong.

I would really appreciate any workaround or help with these issues. This is a 
huge problem and my business depends on it (!) :(

 

  was:
I have a product search hosted on a Solr cloud with 2 shards and two instances 
hosted on EC2, with the following setup:

A product has an unlimited number of children, which are small objects with shop 
information. These child documents of the products define the shops where the 
product is available. The requirement from my side is to update / sync the 
whole documents (parent and children) at least once a day. The availability 
information is included in the child documents with a quantity field.

Problem:
 # After every sync the number of child documents (shops) increases and nests 
deeper with every sync as the quantity changes, because the child documents are 
apparently not updated by id but newly created with the same id (duplicates, 
comparable to SOLR-5211, SOLR-6096, SOLR-12638).
 # Whenever I sync the products with children one level deep (parent > child), 
I get parent > child > child > child > ... depending on how many children there 
are (see screenshot-4.png). These children also can't be displayed with 
nodeType:shop.
 # Whenever I try to request the products (parents) by a child attribute 
(shopId), the search is nondeterministic and does not return the correct 
products. A lot of products contain children that have never been assigned to 
them. Some products are flooded with a huge number of children (>1000) although 
they have about 10 assigned. As you can see in screenshots 1 to 3, there are 
three queries that are exactly the same and return different products. 
Screenshot-1 with 26241 results would be the correct amount and correct data, 
but the other two are completely wrong.

I would really appreciate any workaround or help with these issues. This is a 
huge problem and my business depends on it (!) :(

 


> Duplicate Child Documents and nondeterministic search
> 
>
> Key: SOLR-13167
> URL: https://issues.apache.org/jira/browse/SOLR-13167
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search, SolrCloud
>Affects Versions: 7.5
> Environment: SOLR 7.5 running on AWS EC2 instances with an AMI OS, 
> split into two shards running on two different EC2 instances, with the 
> built-in ZooKeeper of Solr
>Reporter: Kevin Bachmann
>Priority: Major
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png
>
>
> I have a product search hosted on a Solr cloud with 2 shards and two 
> instances hosted on EC2, with the following setup: 
> A product has an unlimited number of children, which are small objects with 
> shop information. These child documents of the products define the shops 
> where the product is available. The requirement from my side is to update / 
> sync the whole documents (parent and children) at least once a day. The 
> availability information is i

Re: Facing an issue with secure solr 7.5 admin panel

2019-01-25 Thread Jan Høydahl
You should wait until Solr 7.7 is released (in a few weeks), as it includes 
Admin panel login; see SOLR-7896.
BTW: This type of question is not suitable for the developer list. Please post 
any follow-up to the solr-u...@lucene.apache.org mailing list.
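
In the meantime, the documented way to require credentials in 7.5 is the 
BasicAuthPlugin via a security.json uploaded to ZooKeeper (in SolrCloud mode); 
it protects the HTTP APIs the Admin UI calls rather than presenting a login 
screen. A sketch of the shape, with the credential hash elided -- real values 
(the Reference Guide example uses user "solr" / password "SolrRocks") come from 
the guide:

{code}
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {"solr": "<base64 SHA-256 hash> <base64 salt>"}
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": {"solr": "admin"},
    "permissions": [{"name": "security-edit", "role": "admin"}]
  }
}
{code}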

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 25. jan. 2019 kl. 14:01 skrev Jasmin Mamtora :
> 
> Hi,
> 
> I want to protect the Solr 7.5 admin panel with credentials (i.e. 
> username and password) so that no one else can access the admin panel.
> 
> Could you please guide me step by step on how to implement this for the 
> Solr 7.5 admin panel?
> 
> Thanks and Regards,
> Jasmin Mamtora



Facing an issue with secure solr 7.5 admin panel

2019-01-25 Thread Jasmin Mamtora
Hi,

I want to protect the Solr 7.5 admin panel with credentials
(i.e. username and password) so that no one else can access the
admin panel.

Could you please guide me step by step on how to implement this for the
Solr 7.5 admin panel?

Thanks and Regards,
Jasmin Mamtora


[jira] [Created] (SOLR-13167) Duplicate Child Documents and nondeterministic search

2019-01-25 Thread Kevin Bachmann (JIRA)
Kevin Bachmann created SOLR-13167:
-

 Summary: Duplicate Child Documents and nondeterministic search
 Key: SOLR-13167
 URL: https://issues.apache.org/jira/browse/SOLR-13167
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search, SolrCloud
Affects Versions: 7.5
 Environment: SOLR 7.5 running on AWS EC2 instances with an AMI OS, 
split into two shards running on two different EC2 instances, with the 
built-in ZooKeeper of Solr
Reporter: Kevin Bachmann
 Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
screenshot-4.png

I have a product search hosted on a Solr cloud with 2 shards and two instances 
hosted on EC2, with the following setup:

A product has an unlimited number of children, which are small objects with shop 
information. These child documents of the products define the shops where the 
product is available. The requirement from my side is to update / sync the 
whole documents (parent and children) at least once a day. The availability 
information is included in the child documents with a quantity field.

Problem:
 # After every sync the number of child documents (shops) increases and nests 
deeper with every sync as the quantity changes, because the child documents are 
apparently not updated by id but newly created with the same id (duplicates, 
comparable to SOLR-5211, SOLR-6096, SOLR-12638).
 # Whenever I sync the products with children one level deep (parent > child), 
I get parent > child > child > child > ... depending on how many children there 
are (see screenshot-4.png). These children also can't be displayed with 
nodeType:shop.
 # Whenever I try to request the products (parents) by a child attribute 
(shopId), the search is nondeterministic and does not return the correct 
products. A lot of products contain children that have never been assigned to 
them. Some products are flooded with a huge number of children (>1000) although 
they have about 10 assigned. As you can see in screenshots 1 to 3, there are 
three queries that are exactly the same and return different products. 
Screenshot-1 with 26241 results would be the correct amount and correct data, 
but the other two are completely wrong.

I would really appreciate any workaround or help with these issues. This is a 
huge problem and my business depends on it (!) :(
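
For problem 1, a common workaround sketch (collection name hypothetical for 
this setup, and assuming the default _root_ field of block-indexed documents 
is present): in Solr 7.x, re-adding a parent does not reliably replace its 
previously indexed children, so delete the whole block via _root_ first and 
then add the fresh parent-plus-children block.

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ProductResync {
  static void resync(SolrClient client, SolrInputDocument parentWithChildren,
                     String parentId) throws Exception {
    // Drop the old block (the parent and all of its children share one _root_).
    client.deleteByQuery("products", "_root_:\"" + parentId + "\"");
    // Add the fresh block; commit according to your autoCommit policy.
    client.add("products", parentWithChildren);
  }
}
{code}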

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1242 - Still Failing

2019-01-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1242/

No tests ran.

Build Log:
[...truncated 23456 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2478 links (2020 relative) to 3245 anchors in 248 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disal

[jira] [Commented] (LUCENE-8652) Add boosting support in the SynonymQuery

2019-01-25 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752159#comment-16752159
 ] 

Alan Woodward commented on LUCENE-8652:
---

+1 to the patch, and +1 to keeping the simple constructor.

> Add boosting support in the SynonymQuery
> 
>
> Key: LUCENE-8652
> URL: https://issues.apache.org/jira/browse/LUCENE-8652
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8652.patch
>
>
> The SynonymQuery tries to score multiple terms as if you had indexed them as 
> one term.
> This is good for "true" synonyms where each term should have the same 
> contribution to the final score but this doesn't handle the case where terms 
> have different weights. For scoring purpose it would be nice to be able to 
> assign a boost per term that we could multiply with the term's document 
> frequency in order to take into account the importance of the term within the 
> synonym list.
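
A sketch of the intended usage, assuming the patch lands with a builder-style 
API along these lines (field name and boost values are hypothetical): all terms 
still share one blended scoring pass, but each contributes with its own weight.

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.SynonymQuery;

public class BoostedSynonyms {
  static Query build() {
    return new SynonymQuery.Builder("body")
        .addTerm(new Term("body", "tv"), 1.0f)         // canonical term, full weight
        .addTerm(new Term("body", "television"), 0.8f) // near-synonym, down-weighted
        .build();
  }
}
{code}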



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752101#comment-16752101
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Requesting final feedback on the above. I am not able to test the Windows 
script (solr.cmd); it would be great if someone from the community could 
sanity-check it.

Sample startup command:
{code}
bin/solr start -c -p 8983 -w data
{code}

Thanks.


> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> The {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr consists of index files, core properties, and ZK 
> data if the embedded ZooKeeper is started in SolrCloud mode. It would be great 
> if all writable content could live under the same directory, to allow separate 
> READ-ONLY and WRITE-ONLY directories.
> This would then also solve official docker-solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133
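
A minimal solr.xml sketch of the existing setting (the path is hypothetical), 
equivalent to passing -Dsolr.data.home=/var/solr/data:

{code}
<!-- solr.xml: put index data for all cores under one writable directory. -->
<solr>
  <str name="solrDataHome">/var/solr/data</str>
</solr>
{code}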



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8585) Create jump-tables for DocValues at index-time

2019-01-25 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8585?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752100#comment-16752100
 ] 

Adrien Grand commented on LUCENE-8585:
--

I just pushed the above patch.

> Create jump-tables for DocValues at index-time
> --
>
> Key: LUCENE-8585
> URL: https://issues.apache.org/jira/browse/LUCENE-8585
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/codecs
>Affects Versions: 8.0
>Reporter: Toke Eskildsen
>Priority: Minor
>  Labels: performance
> Attachments: LUCENE-8585.patch, LUCENE-8585.patch, 
> make_patch_lucene8585.sh
>
>  Time Spent: 10.5h
>  Remaining Estimate: 0h
>
> As noted in LUCENE-7589, lookup of DocValues should use jump-tables to avoid 
> long iterative walks. This is implemented in LUCENE-8374 at search-time 
> (first request for DocValues from a field in a segment), with the benefit of 
> working without changes to existing Lucene 7 indexes and the downside of 
> introducing a startup time penalty and a memory overhead.
> As discussed in LUCENE-8374, the codec should be updated to create these 
> jump-tables at index time. This eliminates the segment-open time & memory 
> penalties, with the potential downside of increasing index-time for DocValues.
> The three elements of LUCENE-8374 should be transferable to index-time 
> without much alteration of the core structures:
>  * {{IndexedDISI}} block offset and index skips: A {{long}} (64 bits) for 
> every 65536 documents, containing the offset of the block in 33 bits and the 
> index (number of set bits) up to the block in 31 bits.
>  It can be built sequentially and should be stored as a simple sequence of 
> consecutive longs for caching of lookups.
>  As it is fairly small relative to document count, it might be better to 
> simply cache it in memory?
>  * {{IndexedDISI}} DENSE (> 4095, < 65536 set bits) blocks: A {{short}} (16 
> bits) for every 8 {{longs}} (512 bits) for a total of 256 bytes/DENSE_block. 
> Each {{short}} represents the number of set bits up to right before the 
> corresponding sub-block of 512 docIDs.
>  The {{shorts}} can be computed sequentially or when the DENSE block is 
> flushed (probably the easiest). They should be stored as a simple sequence of 
> consecutive shorts for caching of lookups, one logically independent sequence 
> for each DENSE block. The logical position would be one sequence at the start 
> of every DENSE block.
>  Whether it is best to read all the 16 {{shorts}} up front when a DENSE block 
> is accessed or whether it is best to only read any individual {{short}} when 
> needed is not clear at this point.
>  * Variable Bits Per Value: A {{long}} (64 bits) for every 16384 numeric 
> values. Each {{long}} holds the offset to the corresponding block of values.
>  The offsets can be computed sequentially and should be stored as a simple 
> sequence of consecutive {{longs}} for caching of lookups.
>  The vBPV-offsets have the largest space overhead of the 3 jump-tables, and a 
> lot of the 64 bits in each long are not used for most indexes. They could be 
> represented as a simple {{PackedInts}} sequence or {{MonotonicLongValues}}, 
> with the downsides of a potential lookup-time overhead and the need for doing 
> the compression after all offsets have been determined.
> I have no experience with the codec-parts responsible for creating 
> index-structures. I'm quite willing to take a stab at this, although I 
> probably won't do much about it before January 2019. Should anyone else wish 
> to adopt this JIRA-issue or co-work on it, I'll be happy to share.
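
A sketch of the first jump-table's entry layout as described above (not 
Lucene's actual code; names, and which half holds which value, are this 
sketch's choice): one long per 65536-document block, with the block offset in 
the low 33 bits and the cumulative set-bit count in the high 31 bits.

{code}
public final class DisiJumpEntry {
  // Pack offset (33 bits) and index-before-block (31 bits) into one long.
  static long pack(long blockOffset, int indexBeforeBlock) {
    assert blockOffset >= 0 && blockOffset < (1L << 33) && indexBeforeBlock >= 0;
    return ((long) indexBeforeBlock << 33) | blockOffset;
  }
  static long offset(long entry) { return entry & ((1L << 33) - 1); }
  static int  index(long entry)  { return (int) (entry >>> 33); }
}
{code}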



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13029) Allow HDFS backup/restore buffer size to be configured

2019-01-25 Thread Tim Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752109#comment-16752109
 ] 

Tim Owen commented on SOLR-13029:
-

Not sure - I can see someone might want parallelised file copies as well, so 
that ticket is still valid, I think. It probably depends on how many collections 
you have to restore. If (like us) you have many collections to do, we just kick 
them off in parallel and let each one work through its files in series. But if 
you had 1 or 2 large collections, it might be better done with the proposed 
change there.

> Allow HDFS backup/restore buffer size to be configured
> --
>
> Key: SOLR-13029
> URL: https://issues.apache.org/jira/browse/SOLR-13029
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, hdfs
>Affects Versions: 7.5, 8.0
>Reporter: Tim Owen
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.0, 7.7, master (9.0)
>
> Attachments: SOLR-13029.patch, SOLR-13029.patch, SOLR-13029.patch
>
>
> There's a default hardcoded buffer size setting of 4096 in the HDFS code 
> which means in particular that restoring a backup from HDFS takes a long 
> time. Copying multi-GB files from HDFS using a buffer as small as 4096 bytes 
> is very inefficient. We changed this in our local build used in production to 
> 256kB and saw a 10x speed improvement when restoring a backup. Attached patch 
> simply makes this size configurable using a command line setting, much like 
> several other buffer size values.
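
As an aside, a hedged sketch of the kind of copy loop such a setting feeds; the 
system property name and the {{hdfs}}/{{local}} FileSystem variables are 
assumptions, not the exact wiring in the attached patch:

{code}
// Illustrative only: the buffer size comes from a command-line setting
// instead of the hardcoded 4096 described above.
int bufferSize = Integer.getInteger("solr.hdfs.buffer.size", 262144); // 256kB
byte[] buffer = new byte[bufferSize];
try (InputStream in = hdfs.open(backupFile);      // org.apache.hadoop.fs.FileSystem
     OutputStream out = local.create(restoreFile)) {
  int n;
  while ((n = in.read(buffer)) != -1) {
    out.write(buffer, 0, n);
  }
}
{code}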



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8652) Add boosting support in the SynonymQuery

2019-01-25 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752108#comment-16752108
 ] 

Adrien Grand commented on LUCENE-8652:
--

Let's keep the constructor that takes a Term[] on branch_8x?

> Add boosting support in the SynonymQuery
> 
>
> Key: LUCENE-8652
> URL: https://issues.apache.org/jira/browse/LUCENE-8652
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8652.patch
>
>
> The SynonymQuery tries to score multiple terms as if you had indexed them as 
> one term.
> This is good for "true" synonyms, where each term should have the same 
> contribution to the final score, but it doesn't handle the case where terms 
> have different weights. For scoring purposes it would be nice to be able to 
> assign a boost per term that we could multiply with the term's document 
> frequency in order to take into account the importance of the term within the 
> synonym list.
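
For illustration, a hedged sketch of what a per-term boost API could look like; 
the builder style and method names are assumptions, and the actual API is 
whatever the attached patch settles on:

{code}
// "quick" is a weaker synonym of "fast", so its contribution to the
// blended document frequency is scaled down.
SynonymQuery query = new SynonymQuery.Builder("body")
    .addTerm(new Term("body", "fast"), 1.0f)
    .addTerm(new Term("body", "quick"), 0.6f)
    .build();
{code}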



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8652) Add boosting support in the SynonymQuery

2019-01-25 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752105#comment-16752105
 ] 

Adrien Grand commented on LUCENE-8652:
--

+1, the approach makes sense to me.

I'm wondering whether we could wrap the impacts of the boosted synonyms to 
scale term frequencies based on the boost? (As a follow-up; this patch is 
already a great start in my opinion.)

> Add boosting support in the SynonymQuery
> 
>
> Key: LUCENE-8652
> URL: https://issues.apache.org/jira/browse/LUCENE-8652
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8652.patch
>
>
> The SynonymQuery tries to score multiple terms as if you had indexed them as 
> one term.
> This is good for "true" synonyms, where each term should have the same 
> contribution to the final score, but it doesn't handle the case where terms 
> have different weights. For scoring purposes it would be nice to be able to 
> assign a boost per term that we could multiply with the term's document 
> frequency in order to take into account the importance of the term within the 
> synonym list.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13029) Allow HDFS backup/restore buffer size to be configured

2019-01-25 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752082#comment-16752082
 ] 

Mikhail Khludnev commented on SOLR-13029:
-

[~TimOwen] does it mean SOLR-9961 is unlocked? 

> Allow HDFS backup/restore buffer size to be configured
> --
>
> Key: SOLR-13029
> URL: https://issues.apache.org/jira/browse/SOLR-13029
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, hdfs
>Affects Versions: 7.5, 8.0
>Reporter: Tim Owen
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.0, 7.7, master (9.0)
>
> Attachments: SOLR-13029.patch, SOLR-13029.patch, SOLR-13029.patch
>
>
> There's a default hardcoded buffer size setting of 4096 in the HDFS code 
> which means in particular that restoring a backup from HDFS takes a long 
> time. Copying multi-GB files from HDFS using a buffer as small as 4096 bytes 
> is very inefficient. We changed this in our local build used in production to 
> 256kB and saw a 10x speed improvement when restoring a backup. Attached patch 
> simply makes this size configurable using a command line setting, much like 
> several other buffer size values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8651) Tokenizer implementations can't be reset

2019-01-25 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752080#comment-16752080
 ] 

Alan Woodward commented on LUCENE-8651:
---

It's an interesting question.  Tokenizers generally should only be instantiated 
as part of Analyzer.createComponents(), as that's where things like setReader() 
are handled - a consumer of a TokenStream shouldn't need to worry about that at 
all.  I think some clarifying documentation on Tokenizer is a good idea - 
please feel free to put up a patch!
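
For reference, a minimal sketch of that pattern under the standard analysis API 
(KeywordTokenizer is just a stand-in for any Tokenizer):

{code}
// The Tokenizer is created once inside createComponents(); the Analyzer
// then handles setReader()/reset() for every new piece of input.
Analyzer analyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    return new TokenStreamComponents(new KeywordTokenizer());
  }
};

// A consumer never needs to touch setReader() directly:
try (TokenStream ts = analyzer.tokenStream("field", "some text")) {
  ts.reset();
  while (ts.incrementToken()) {
    // inspect token attributes...
  }
  ts.end();
}
{code}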

> Tokenizer implementations can't be reset
> 
>
> Key: LUCENE-8651
> URL: https://issues.apache.org/jira/browse/LUCENE-8651
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Dan Meehl
>Priority: Major
> Attachments: LUCENE-8650-2.patch, LUCENE-8651.patch, LUCENE-8651.patch
>
>
> The fine print here is that they can't be reset without calling setReader() 
> every time before reset() is called. The reason for this is that Tokenizer 
> violates the contract put forth by TokenStream.reset() which is the following:
> "Resets this stream to a clean state. Stateful implementations must implement 
> this method so that they can be reused, just as if they had been created 
> fresh."
> Tokenizer implementations' reset functions can't reset in that manner because 
> Tokenizer.close() removes the reference to the underlying Reader (see 
> LUCENE-2387). The catch-22 here is that we don't want to unnecessarily keep 
> around a Reader (memory leak), but we would like to be able to reset() if 
> necessary.
> The patches include an integration test that attempts to use a 
> ConcatenatingTokenStream to join an input TokenStream with a KeywordTokenizer 
> TokenStream. This test fails with an IllegalStateException thrown by 
> Tokenizer.ILLEGAL_STATE_READER.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13029) Allow HDFS backup/restore buffer size to be configured

2019-01-25 Thread Tim Owen (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752062#comment-16752062
 ] 

Tim Owen commented on SOLR-13029:
-

Thanks Mikhail!

> Allow HDFS backup/restore buffer size to be configured
> --
>
> Key: SOLR-13029
> URL: https://issues.apache.org/jira/browse/SOLR-13029
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, hdfs
>Affects Versions: 7.5, 8.0
>Reporter: Tim Owen
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 8.0, 7.7, master (9.0)
>
> Attachments: SOLR-13029.patch, SOLR-13029.patch, SOLR-13029.patch
>
>
> There's a default hardcoded buffer size setting of 4096 in the HDFS code 
> which means in particular that restoring a backup from HDFS takes a long 
> time. Copying multi-GB files from HDFS using a buffer as small as 4096 bytes 
> is very inefficient. We changed this in our local build used in production to 
> 256kB and saw a 10x speed improvement when restoring a backup. Attached patch 
> simply makes this size configurable using a command line setting, much like 
> several other buffer size values.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8657) CharsRef.compareTo() should always be in UTF-8 order

2019-01-25 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752059#comment-16752059
 ] 

Adrien Grand commented on LUCENE-8657:
--

It might be a bit surprising that CharsRef doesn't compare like String. I'm 
wondering whether we should just un-deprecate this comparator?

> CharsRef.compareTo() should always be in UTF-8 order
> 
>
> Key: LUCENE-8657
> URL: https://issues.apache.org/jira/browse/LUCENE-8657
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.0
>
> Attachments: LUCENE-8657.patch
>
>
> CharsRef.compareTo() currently compares char values directly, i.e. in UTF-16 
> code unit order.  However, everywhere that CharsRef objects are compared in 
> the codebase instead uses the deprecated UTF16SortedAsUTF8Comparator static 
> comparator.  We should just reimplement compareTo() to use UTF-8 comparisons 
> instead, and remove the deprecated methods.
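
To see why the two orders can disagree at all, a small standalone example (plain 
java.lang.String rather than CharsRef, purely for illustration):

{code}
// Supplementary code points are surrogate pairs (0xD800-0xDFFF) in UTF-16,
// so they sort below BMP characters in the 0xE000-0xFFFF range, while in
// UTF-8 byte order they sort above all BMP characters.
String bmp  = "\uFB01";        // U+FB01, a BMP character above the surrogates
String supp = "\uD83D\uDE00";  // U+1F600, encoded as a surrogate pair

System.out.println(bmp.compareTo(supp) > 0);                  // true: UTF-16 order
System.out.println(bmp.codePointAt(0) < supp.codePointAt(0)); // true: UTF-8 order
{code}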



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13166:

Attachment: SOLR-13166.patch

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13166.patch
>
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752042#comment-16752042
 ] 

Amrit Sarkar commented on SOLR-13166:
-

Attaching a patch with the following design:

1. SchemaChecksManager: performs a few hard-coded checks, e.g. against changing 
docValues, indexed, multiValued etc. while some documents are already indexed. 
A given check may or may not apply, but when one fires an error is returned to 
the user with a helpful, justifying message.
2. SolrConfigChecksManager: performs a few hard-coded checks for autoCommits and 
cache sizes.

To bypass such checks and execute the command anyway, use the inline parameter 
*{{force=true}}*.
e.g.
{code}
curl http://localhost:8983/solr/wiki/config?force=true -H 
'Content-type:application/json' -d'
{
  "set-property": {
"updateHandler.autoCommit.maxTime":15000,
"updateHandler.autoCommit.openSearcher":false
  }
}'
{code}
{code}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "replace-field":{
 "name":"id",
 "type":"text_general",
 "stored":false }
}' http://localhost:8983/solr/wiki/schema?force=true
{code}

Requesting feedback, both on this patch and on any other ways we could tackle 
this issue.

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8658) Illegal assertion in WANDScorer

2019-01-25 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-8658.
--
   Resolution: Fixed
Fix Version/s: master (9.0)
   8.0

> Illegal assertion in WANDScorer
> ---
>
> Key: LUCENE-8658
> URL: https://issues.apache.org/jira/browse/LUCENE-8658
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: 8.0, master (9.0)
>
> Attachments: LUCENE-8658.patch
>
>
> [~jim.ferenczi] told me about an assertion error that he ran into while 
> playing with WANDScorer.
> WANDScorer tries to avoid having to deal with accuracy issues on 
> floating-point numbers. In order to do this, it turns all scores into 
> integers by multiplying them by a scaling factor, and then rounds minimum 
> competitive scores down and maximum scores up. This scaling factor is 
> computed in the constructor in such a way that scores end up in the 0..65536 
> range. Sub-scorers that have a maximum score of +Infty are ignored.
> The assertion is triggered in the rare case that a Scorer returns +Infty for 
> its maximum score when computing the scaling factor, but then returns finite 
> values that are greater than the maximum scores of other clauses when asked 
> for the maximum score over smaller ranges of doc ids.
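
A simplified sketch of the scaling idea, not the actual WANDScorer code:

{code}
// Scores are turned into integers with a power-of-two scaling factor;
// minimum competitive scores are rounded down and maximum scores up, so
// the integer bounds stay conservative.
static long scaleMinScore(float minScore, int scalingFactor) {
  return (long) Math.floor(Math.scalb(minScore, scalingFactor)); // round down
}

static long scaleMaxScore(float maxScore, int scalingFactor) {
  if (maxScore == Float.POSITIVE_INFINITY) {
    return Long.MAX_VALUE; // +Infty maximum scores carry no useful bound
  }
  return (long) Math.ceil(Math.scalb(maxScore, scalingFactor));  // round up
}
{code}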



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [DISCUSS] Opening old indices for reading

2019-01-25 Thread Adrien Grand
Agreed with Michael that setting expectations is going to be
important. The thing that I would like to make sure of is that we would
never refrain from moving Lucene forward because of this feature. In
particular, lucene-core should be free to make assumptions that are
valid for N and N-1 indices without worrying about the fact that we
have this super-expert feature that allows opening older indices. Here
are some assumptions that I have in mind which have not always been
true:
 - norms might be encoded in a different way (this changed in 7)
 - all index files have a checksum (only true since Lucene 5)
 - offsets are always going forward (only enforced since Lucene 7)

This means that carrying indices over by just merging them with the
new version to move them to a new codec won't work all the time. For
instance if your index has backward offsets and new codecs assume that
offsets are going forward, then merging might fail or corrupt offsets
- I'd like to make sure that we would not consider this a bug.

Erick, I don't think this feature would be suitable for "robust index
upgrades". To me it is really a best effort and shouldn't be trusted
too much.

I think some users will be tempted to wrap old readers to make them
look good and then add them back to an index using addIndexes.
Something like https://issues.apache.org/jira/browse/LUCENE-8277 would
help make it harder to introduce corrupt data in an index.

On Wed, Jan 23, 2019 at 3:11 PM Simon Willnauer
 wrote:
>
> Hey folks,
>
> tl;dr; I want to be able to open an IndexReader on an old index if the
> SegmentInfo version is supported and all segment codecs are available.
> Today that's not possible even if I port old formats to current
> versions.
>
> Our BWC policy for quite a while has been N-1 major versions. That's
> good and I think we should keep it that way. Only recently, prompted by
> changes to how we encode/decode norms, we also hard-enforce the
> index-version-created in several places and the version a segment was
> written with. These are great enforcements and I understand why. My
> request here is whether we can find consensus on somehow allowing (via a
> special DirectoryReader, for instance) such an index to be opened for
> reading only, without the guarantee that our high-level APIs decode
> norms correctly, for instance. This would be enough to consume stored
> fields etc. for reindexing, or, if users are aware of the caveats, to do
> the norms decoding in the codec. I am happy to work on a proposal for
> how this would work. It would still enforce no writing or anything like
> that. I am also all for putting such a reader into misc and marking it
> experimental.
>
> simon
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>


-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13166:

Description: 
While working with Solr, schema and configuration changes without understanding 
can result in severe node failures, and much effort and time get consumed to 
fix such situations.

Few such problematic situations can be:

* Too many fields in the schema
* Too many commits: too short auto commit
* Spellchecker, suggester issues. Build suggester index on startup or on every 
commit causes memory pressure and latency issues
-- Schema mess-ups
* Text field commented out and Solr refuses to reload core
* Rename field type for unique key or version field
* Single-valued to multivalued and vice versa
* Switching between docvalues on/off
* Changing text to string type because user wanted to facet on a text field

The intention is to add a layer above Schema and Config API to have some checks 
and let the end user know the ramifications of the changes he/she intends to do.


  was:
While working with Solr, schema and configuration changes without understanding 
can result in severe node failures, and much effort and time get consumed to 
fix such situations.

Few such problematic situations can be:

* Too many fields in the schema
* Too many commits: too short auto commit
* Too high cache sizes set which bloats heap memory
* Spellchecker, suggester issues. Build suggester index on startup or on every 
commit causes memory pressure and latency issues
-- Schema mess-ups
* Text field commented out and Solr refuses to reload core
* Rename field type for unique key or version field
* Single-valued to multivalued and vice versa
* Switching between docvalues on/off
* Copy/pasting from old solr schema examples and trying them on new versions
* Changing text to string type because user wanted to facet on a text field
* CDCR: if user forgets turning off buffer and the target goes down, the tlog 
accumulates until node runs out of disk space or has huge recovery time.

The intention is to add a layer above Schema and Config API to have some checks 
and let the end user know the ramifications of the changes he/she intends to do.



> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13166:
---

 Summary: Add smart checks for Config and Schema API in Solr to 
avoid malicious updates
 Key: SOLR-13166
 URL: https://issues.apache.org/jira/browse/SOLR-13166
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: config-api, Schema and Analysis
Reporter: Amrit Sarkar


While working with Solr, schema and configuration changes without understanding 
can result in severe node failures, and much effort and time get consumed to 
fix such situations.

Few such problematic situations can be:

* Too many fields in the schema
* Too many commits: too short auto commit
* Too high cache sizes set which bloats heap memory
* Spellchecker, suggester issues. Build suggester index on startup or on every 
commit causes memory pressure and latency issues
-- Schema mess-ups
* Text field commented out and Solr refuses to reload core
* Rename field type for unique key or version field
* Single-valued to multivalued and vice versa
* Switching between docvalues on/off
* Copy/pasting from old solr schema examples and trying them on new versions
* Changing text to string type because user wanted to facet on a text field
* CDCR: if user forgets turning off buffer and the target goes down, the tlog 
accumulates until node runs out of disk space or has huge recovery time.

The intention is to add a layer above Schema and Config API to have some checks 
and let the end user know the ramifications of the changes he/she intends to do.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk1.8.0) - Build # 23 - Still Unstable!

2019-01-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/23/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest

Error Message:
ObjectTracker found 6 object(s) that were not released!!! [InternalHttpClient, 
SolrCore, MMapDirectory, MMapDirectory, MMapDirectory, MMapDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:321)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:330)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:225)  
at org.apache.solr.handler.IndexFetcher.(IndexFetcher.java:267)  at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:420) 
 at org.apache.solr.cloud.RecoveryStrategy.replicate(RecoveryStrategy.java:237) 
 at 
org.apache.solr.cloud.RecoveryStrategy.doReplicateOnlyRecovery(RecoveryStrategy.java:382)
  at 
org.apache.solr.cloud.RecoveryStrategy.doRecovery(RecoveryStrategy.java:328)  
at org.apache.solr.cloud.RecoveryStrategy.run(RecoveryStrategy.java:307)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)  
at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
 at java.lang.Thread.run(Thread.java:748)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.(SolrCore.java:1056)  at 
org.apache.solr.core.SolrCore.(SolrCore.java:876)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1189)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1099)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:395)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:180)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
  at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:736)  
at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:717)  
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:394)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:340)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:164)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1610)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) 
 at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:703)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) 
 at org.eclipse.jetty.server.Server.handle(Server.java:502)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.suc

[jira] [Commented] (LUCENE-8658) Illegal assertion in WANDScorer

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752015#comment-16752015
 ] 

ASF subversion and git services commented on LUCENE-8658:
-

Commit ef47582fd5fcf0f444a925106b7ea354f8edbcfc in lucene-solr's branch 
refs/heads/master from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ef47582 ]

LUCENE-8658: Fix illegal assertion in WANDScorer.


> Illegal assertion in WANDScorer
> ---
>
> Key: LUCENE-8658
> URL: https://issues.apache.org/jira/browse/LUCENE-8658
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8658.patch
>
>
> [~jim.ferenczi] told me about an assertion error that he ran into while 
> playing with WANDScorer.
> WANDScorer tries to avoid having to deal with accuracy issues on 
> floating-point numbers. In order to do this, it turns all scores into 
> integers by multiplying them by a scaling factor, and then rounds minimum 
> competitive scores down and maximum scores up. This scaling factor is 
> computed in the constructor in such a way that scores end up in the 0..65536 
> range. Sub-scorers that have a maximum score of +Infty are ignored.
> The assertion is triggered in the rare case that a Scorer returns +Infty for 
> its maximum score when computing the scaling factor, but then returns finite 
> values that are greater than the maximum scores of other clauses when asked 
> for the maximum score over smaller ranges of doc ids.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8658) Illegal assertion in WANDScorer

2019-01-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752014#comment-16752014
 ] 

ASF subversion and git services commented on LUCENE-8658:
-

Commit 5286439fb93cd88ad79181983202c7ac3cff6711 in lucene-solr's branch 
refs/heads/branch_8x from Adrien Grand
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5286439 ]

LUCENE-8658: Fix illegal assertion in WANDScorer.


> Illegal assertion in WANDScorer
> ---
>
> Key: LUCENE-8658
> URL: https://issues.apache.org/jira/browse/LUCENE-8658
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8658.patch
>
>
> [~jim.ferenczi] told me about an assertion error that he ran into while 
> playing with WANDScorer.
> WANDScorer tries to avoid having to deal with accuracy issues on 
> floating-point numbers. In order to do this, it turns all scores into 
> integers by multiplying them by a scaling factor, and then rounds minimum 
> competitive scores down and maximum scores up. This scaling factor is 
> computed in the constructor in such a way that scores end up in the 0..65536 
> range. Sub-scorers that have a maximum score of +Infty are ignored.
> The assertion is triggered in the rare case that a Scorer returns +Infty for 
> its maximum score when computing the scaling factor, but then returns finite 
> values that are greater than the maximum scores of other clauses when asked 
> for the maximum score over smaller ranges of doc ids.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org