[jira] [Commented] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-03-16 Thread Tapan Vaishnav (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403273#comment-16403273
 ] 

Tapan Vaishnav commented on SOLR-11913:
---

[~dsmiley] [~gus_heck]
I have added the patch. Please have a look at it and let me know your feedback.

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch
>
>
> SolrParams ought to implement {{Iterable<Map.Entry<String,String[]>>}} so that 
> it's easier to iterate on it, either using Java 5 for-each style, or Java 8 
> streams.  The implementation on ModifiableSolrParams can delegate through to 
> the underlying LinkedHashMap entry set.  The default impl can produce a 
> Map.Entry with a getValue that calls through to getParams.
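A minimal sketch of what such a default {{iterator()}} might look like, assuming only the existing SolrParams methods {{getParameterNamesIterator()}} and {{getParams(String)}} (the class and method arrangement below are illustrative only, not the attached patch):

{code:java}
import java.util.AbstractMap;
import java.util.Iterator;
import java.util.Map;

// Hypothetical sketch (not the SOLR-11913 patch): a default iterator() that a
// SolrParams-like class could provide on top of its existing abstract methods.
public abstract class IterableParamsSketch implements Iterable<Map.Entry<String, String[]>> {

  public abstract Iterator<String> getParameterNamesIterator();

  public abstract String[] getParams(String param);

  @Override
  public Iterator<Map.Entry<String, String[]>> iterator() {
    final Iterator<String> names = getParameterNamesIterator();
    return new Iterator<Map.Entry<String, String[]>>() {
      @Override
      public boolean hasNext() {
        return names.hasNext();
      }

      @Override
      public Map.Entry<String, String[]> next() {
        String name = names.next();
        // The entry's value resolves through getParams(name), as the description suggests.
        return new AbstractMap.SimpleImmutableEntry<>(name, getParams(name));
      }
    };
  }
}
{code}

As the description notes, ModifiableSolrParams could skip such a default and simply return the iterator of its backing LinkedHashMap's entry set.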



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11913) SolrParams ought to implement Iterable<Map.Entry<String,String[]>>

2018-03-16 Thread Tapan Vaishnav (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tapan Vaishnav updated SOLR-11913:
--
Attachment: SOLR-11913.patch

> SolrParams ought to implement Iterable<Map.Entry<String,String[]>>
> --
>
> Key: SOLR-11913
> URL: https://issues.apache.org/jira/browse/SOLR-11913
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11913.patch
>
>
> SolrParams ought to implement {{Iterable<Map.Entry<String,String[]>>}} so that 
> it's easier to iterate on it, either using Java 5 for-each style, or Java 8 
> streams.  The implementation on ModifiableSolrParams can delegate through to 
> the underlying LinkedHashMap entry set.  The default impl can produce a 
> Map.Entry with a getValue that calls through to getParams.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21653 - Unstable!

2018-03-16 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 278 - Still Unstable

2018-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/278/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/3/consoleText

[repro] Revision: 2ca741d36a3078e7d7b03cb73176a1e99377eefc

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.method=doTestIndexFetchOnMasterRestart -Dtests.seed=759681D182A67553 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=mt-MT -Dtests.timezone=America/Danmarkshavn 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.method=testMultipleThreads -Dtests.seed=759681D182A67553 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-SG -Dtests.timezone=America/Port_of_Spain 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.seed=759681D182A67553 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-CN -Dtests.timezone=GB-Eire -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be8dca3c7bc064bc42662cb3fa6eb7439ffc7fdb
[repro] git fetch
[repro] git checkout 2ca741d36a3078e7d7b03cb73176a1e99377eefc

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro]   TestReplicationHandler
[repro]   TestLargeCluster
[repro] ant compile-test

[...truncated 3292 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest|*.TestReplicationHandler|*.TestLargeCluster"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=759681D182A67553 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=zh-SG -Dtests.timezone=America/Port_of_Spain 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 8787 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro]   0/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro] git checkout be8dca3c7bc064bc42662cb3fa6eb7439ffc7fdb

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2018-03-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403200#comment-16403200
 ] 

Varun Thacker commented on SOLR-11331:
--

Hi Uwe,
{quote}bq.  [~varunthacker]: Should I commit this and push?
{quote}
Sounds good to me! The only thing missing in the latest patch is your name in 
the CHANGES entry :)

I tested this out on a clean build and was able to run both the launch 
configurations. 

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.6.2
>Reporter: Karthik Ramachandran
>Assignee: Varun Thacker
>Priority: Minor
>  Labels: eclipse
> Attachments: SOLR-11331.diff, SOLR-11331.patch, SOLR-11331.patch, 
> SOLR-11331.patch, SOLR-11331.patch, SOLR-11331.patch, UI.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.3

2018-03-16 Thread Varun Thacker
I was going through the blockers for 7.3 and only SOLR-12070 came up. Is
the fix complete for this, Andrzej?

@Alan: When do you plan on cutting an RC? I committed SOLR-12083
yesterday and SOLR-12063 today to master/branch_7x. Both are important
fixes for CDCR, so if you are okay with it I can backport them to the release branch.

On Fri, Mar 16, 2018 at 4:58 PM, Đạt Cao Mạnh 
wrote:

> Hi guys, Alan
>
> I committed the fix for SOLR-12110 to branch_7_3
>
> Thanks!
>
> On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh 
> wrote:
>
>> Hi Alan,
>>
>> Sure the issue is marked as Blocker for 7.3.
>>
>> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward 
>> wrote:
>>
>>> Thanks Đạt, could you mark the issue as a Blocker and let me know when
>>> it’s been resolved?
>>>
>>> On 16 Mar 2018, at 02:05, Đạt Cao Mạnh  wrote:
>>>
>>> Hi guys, Alan,
>>>
>>> I found a blocker issue, SOLR-12110, while investigating a test failure.
>>> I've already uploaded a patch and am beasting the tests; if the results are good
>>> I will commit soon.
>>>
>>> Thanks!
>>>
>>> On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward 
>>> wrote:
>>>
 Just realised that I don’t have an ASF Jenkins account - Uwe or Steve,
 can you give me a hand setting up the 7.3 Jenkins jobs?

 Thanks, Alan


 On 12 Mar 2018, at 09:32, Alan Woodward  wrote:

 I’ve created the 7.3 release branch.  I’ll leave 24 hours for bug-fixes
 and doc patches and then create a release candidate.

 We’re now in feature-freeze for 7.3, so please bear in mind the
 following:

- No new features may be committed to the branch.
- Documentation patches, build patches and serious bug fixes may be
committed to the branch. However, you should submit *all* patches
you want to commit to Jira first to give others the chance to review and
possibly vote against the patch. Keep in mind that it is our main 
 intention
to keep the branch as stable as possible.
- All patches that are intended for the branch should first be
committed to the unstable branch, merged into the stable branch, and 
 then
into the current release branch.
- Normal unstable and stable branch development may continue as
usual. However, if you plan to commit a big change to the unstable 
 branch
while the branch feature freeze is in effect, think twice: can't the
addition wait a couple more days? Merges of bug fixes into the branch 
 may
become more difficult.
- *Only* Jira issues with Fix version “7.3" and priority "Blocker"
will delay a release candidate build.



 On 9 Mar 2018, at 16:43, Alan Woodward  wrote:

 FYI I’m still recovering from my travels, so I’m going to create the
 release branch on Monday instead.

 On 27 Feb 2018, at 18:51, Cassandra Targett 
 wrote:

 I intend to create the Ref Guide RC as soon as the Lucene/Solr
 artifacts RC is ready, so this is a great time to remind folks that if
 you've got Ref Guide changes to be done, you've got a couple weeks. If
 you're stuck or not sure what to do, let me know & I'm happy to help you
 out.

 Eventually we'd like to release both the Ref Guide and Lucene/Solr with
 the same release process, so this will be a big first test to see how ready
 for that we are.

 On Tue, Feb 27, 2018 at 11:42 AM, Michael McCandless <
 luc...@mikemccandless.com> wrote:

> +1
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
> On Fri, Feb 23, 2018 at 4:50 AM, Alan Woodward <
> alan.woodw...@romseysoftware.co.uk> wrote:
>
>> Hi all,
>>
>> It’s been a couple of months since the 7.2 release, and we’ve
>> accumulated some nice new features since then.  I’d like to volunteer to 
>> be
>> RM for a 7.3 release.
>>
>> I’m travelling for the next couple of weeks, so I would plan to
>> create the release branch two weeks today, on the 9th March (unless 
>> anybody
>> else wants to do it sooner, of course :)
>>
>> - Alan
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>




>>>


[jira] [Commented] (SOLR-8014) Replace langdetect lib by more updated fork

2018-03-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403196#comment-16403196
 ] 

Steve Rowe commented on SOLR-8014:
--

bq. Would OpenNLP be a better option? They recently released a trained model 
that supports 103 languages.

This has already been implemented: SOLR-11592, which will be included in 
soon-to-be-released Solr 7.3.

> Replace langdetect lib by more updated fork
> ---
>
> Key: SOLR-8014
> URL: https://issues.apache.org/jira/browse/SOLR-8014
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - LangId
>Reporter: Jan Høydahl
>Priority: Major
>
> The language-detection library we use is 
> https://code.google.com/p/language-detection/ version 1.1 from 2012. The 
> project has stalled with no new development, not even in the [github 
> repo](https://github.com/shuyo/language-detection) the original author put up.
> Looks like the most promising fork is this one 
> https://github.com/optimaize/language-detector/ which is also being selected 
> by the Tika project to replace Tika's old detector.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8014) Replace langdetect lib by more updated fork

2018-03-16 Thread Ryan Pedela (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403190#comment-16403190
 ] 

Ryan Pedela commented on SOLR-8014:
---

Would OpenNLP be a better option? They recently released a [trained 
model|https://opennlp.apache.org/models.html] that supports 103 languages.

> Replace langdetect lib by more updated fork
> ---
>
> Key: SOLR-8014
> URL: https://issues.apache.org/jira/browse/SOLR-8014
> Project: Solr
>  Issue Type: Improvement
>  Components: contrib - LangId
>Reporter: Jan Høydahl
>Priority: Major
>
> The language-detection library we use is 
> https://code.google.com/p/language-detection/ version 1.1 from 2012. The 
> project has stalled with no new development, not even in the [github 
> repo](https://github.com/shuyo/language-detection) the original author put up.
> Looks like the most promising fork is this one 
> https://github.com/optimaize/language-detector/ which is also being selected 
> by the Tika project to replace Tika's old detector.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+43) - Build # 1545 - Unstable!

2018-03-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1545/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.SSLMigrationTest.test

Error Message:
Replica didn't have the proper urlScheme in the ClusterState

Stack Trace:
java.lang.AssertionError: Replica didn't have the proper urlScheme in the 
ClusterState
at 
__randomizedtesting.SeedInfo.seed([FB65C2A906A49FDF:7331FD73A858F227]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.SSLMigrationTest.assertReplicaInformation(SSLMigrationTest.java:103)
at 
org.apache.solr.cloud.SSLMigrationTest.testMigrateSSL(SSLMigrationTest.java:96)
at org.apache.solr.cloud.SSLMigrationTest.test(SSLMigrationTest.java:60)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Comment Edited] (SOLR-10912) Adding automatic patch validation

2018-03-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403172#comment-16403172
 ] 

Steve Rowe edited comment on SOLR-10912 at 3/17/18 12:54 AM:
-

{quote}
bq. 6. Request ASF Infrastructure to add LUCENE and SOLR to the list of 
projects that use the PreCommit-Admin Jenkins job to enqueue precommit runs for 
new patches on LUCENE/SOLR JIRAs with the "Patch Available" state. (I'll make a 
JIRA for this and link it to this issue.)
Done: INFRA-16194
{quote}

This is now completed.

The {{PreCommit-Admin}} job is scheduled to run every 10 minutes (can stretch 
to 40 minutes or longer though, depending on executor availability), and in the 
first runs after INFRA-16194 was done, two Lucene/Solr qualifying issues (i.e. 
with "Patch Available" status and updated some time in the last 2 weeks) were 
submitted: LUCENE-8197 and SOLR-11331.  Unfortunately, I had not properly 
configured the auth token on the {{PreCommit-\{LUCENE,SOLR\}-Build}} jobs -- 
{{PreCommit-Admin}} always supplies token 'hadoopqa' when it triggers all 
{{PreCommit-\*}} jobs, and I had configured the jobs to expect 'lucenesolrqa'; 
I've since fixed this -- and as a result the builds didn't kick off, but 
{{PreCommit-Admin}}'s database of submitted patches now includes the 
attachments that were submitted as already dealt with, so those patches won't 
be validated until somebody uploads new patches there.

I re-opened and switched status to "Patch Available" on the two test issues I 
created to manually test the new {{PreCommit}} jobs (LUCENE-8210 and 
SOLR-12106). {{PreCommit-Admin}} has now run again and has queued the 
corresponding {{PreCommit}} jobs to validate the patches on those two issues 
(once they run the results will be available at 
[https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-LUCENE-Build/10/]
 and 
[https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-SOLR-Build/6/]).


was (Author: steve_rowe):
{quote}
bq. 6. Request ASF Infrastructure to add LUCENE and SOLR to the list of 
projects that use the PreCommit-Admin Jenkins job to enqueue precommit runs for 
new patches on LUCENE/SOLR JIRAs with the "Patch Available" state. (I'll make a 
JIRA for this and link it to this issue.)
Done: INFRA-16194
{quote}

This is now completed.

The PreCommit-Admin is scheduled to run every 10 minutes (can stretch to 40 
minutes depending on executor availability though), and in the first runs after 
INFRA-16194 was done, two Lucene/Solr issues qualified ("Patch Available" 
status and updated some time in the last 2 weeks) were submitted: LUCENE-8197 
and SOLR-11331.  Unfortunately, I had not properly configured the auth token on 
the {{PreCommit-\{LUCENE,SOLR\}-Build}} jobs -- {{PreCommit-Admin}} always 
supplies token 'hadoopqa' when it triggers all {{PreCommit-\*}} jobs, and I had 
configured the jobs to expect 'lucenesolrqa'; I've since fixed this -- and as a 
result the builds didn't kick off, but {{PreCommit-Admin}}'s database of 
submitted patches now includes the attachments that were submitted as already 
dealt with, so those patches won't be validated until somebody uploads new 
patches there.

I re-opened and switched status to "Patch Available" on the two test issues I 
created to manually test the new {{PreCommit}} jobs (LUCENE-8210 and 
SOLR-12106). {{PreCommit-Admin}} has now run again and has queued the 
corresponding {{PreCommit}} jobs to validate the patches on those two issues 
(once they run the results will be available at 
[https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-LUCENE-Build/10/]
 and 
[https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-SOLR-Build/6/]).

> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-10912.ok-patch-in-core.patch, SOLR-10912.patch, 
> SOLR-10912.patch, SOLR-10912.sample-patch.patch, 
> SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing the introduction of automated patch validation, similar to what Hadoop and 
> other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criteria before getting approved. It would 
> save time for developers (faster feedback loop), save time for committers 
> (fewer manual steps), and would increase quality.
> Hadoop currently uses Apache Yetus to run validations, which seems to be 
> a good starting point. This jira could be the place to discuss the 
> preferred solution.



--
This message was sent 

[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-03-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403172#comment-16403172
 ] 

Steve Rowe commented on SOLR-10912:
---

{quote}
bq. 6. Request ASF Infrastructure to add LUCENE and SOLR to the list of 
projects that use the PreCommit-Admin Jenkins job to enqueue precommit runs for 
new patches on LUCENE/SOLR JIRAs with the "Patch Available" state. (I'll make a 
JIRA for this and link it to this issue.)
Done: INFRA-16194
{quote}

This is now completed.

The PreCommit-Admin is scheduled to run every 10 minutes (can stretch to 40 
minutes depending on executor availability though), and in the first runs after 
INFRA-16194 was done, two Lucene/Solr issues qualified ("Patch Available" 
status and updated some time in the last 2 weeks) were submitted: LUCENE-8197 
and SOLR-11331.  Unfortunately, I had not properly configured the auth token on 
the {{PreCommit-\{LUCENE,SOLR\}-Build}} jobs -- {{PreCommit-Admin}} always 
supplies token 'hadoopqa' when it triggers all {{PreCommit-\*}} jobs, and I had 
configured the jobs to expect 'lucenesolrqa'; I've since fixed this -- and as a 
result the builds didn't kick off, but {{PreCommit-Admin}}'s database of 
submitted patches now includes the attachments that were submitted as already 
dealt with, so those patches won't be validated until somebody uploads new 
patches there.

I re-opened and switched status to "Patch Available" on the two test issues I 
created to manually test the new {{PreCommit}} jobs (LUCENE-8210 and 
SOLR-12106). {{PreCommit-Admin}} has now run again and has queued the 
corresponding {{PreCommit}} jobs to validate the patches on those two issues 
(once they run the results will be available at 
[https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-LUCENE-Build/10/]
 and 
[https://builds.apache.org/view/PreCommit%20Builds/job/PreCommit-SOLR-Build/6/]).

> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-10912.ok-patch-in-core.patch, SOLR-10912.patch, 
> SOLR-10912.patch, SOLR-10912.sample-patch.patch, 
> SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing the introduction of automated patch validation, similar to what Hadoop and 
> other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criteria before getting approved. It would 
> save time for developers (faster feedback loop), save time for committers 
> (fewer manual steps), and would increase quality.
> Hadoop currently uses Apache Yetus to run validations, which seems to be 
> a good starting point. This jira could be the place to discuss the 
> preferred solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 13 - Still Unstable

2018-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/13/

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 1) Thread[id=26466, 
name=AutoscalingActionExecutor-7414-thread-1, state=RUNNABLE, 
group=TGRP-TestLargeCluster] at 
java.util.stream.Sink$ChainedReference.end(Sink.java:258) at 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482) 
at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471) 
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)  
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)   
  at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:300)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:289)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.addReplica(Row.java:122) 
at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:59)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
 at 
org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$645/355615214.run(Unknown
 Source) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$118/1804694261.run(Unknown
 Source) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
   1) Thread[id=26466, name=AutoscalingActionExecutor-7414-thread-1, 
state=RUNNABLE, group=TGRP-TestLargeCluster]
at java.util.stream.Sink$ChainedReference.end(Sink.java:258)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:300)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:289)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.addReplica(Row.java:122)
at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:59)
at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
at 
org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$645/355615214.run(Unknown
 Source)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$118/1804694261.run(Unknown
 Source)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([7AB7A0FEDD558F]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

Error Message:

[jira] [Commented] (SOLR-12059) Unable to rename solr.xml

2018-03-16 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403170#comment-16403170
 ] 

Shawn Heisey commented on SOLR-12059:
-

If I clone the master branch from git and then run "git grep -c solr\.xml" I 
get this output:

{noformat}
solr/CHANGES.txt:62
solr/bin/install_solr_service.sh:3
solr/bin/solr:3
solr/bin/solr.cmd:3
solr/bin/solr.in.cmd:1
solr/bin/solr.in.sh:1
solr/cloud-dev/solrcloud-start.sh:1
solr/contrib/clustering/src/test-files/clustering/solr/solr.xml:1
solr/contrib/dataimporthandler/src/test-files/dih/solr/solr.xml:1
solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestContentStreamDataSource.java:2
solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorEndToEnd.java:2
solr/core/src/java/org/apache/solr/cloud/ZkCLI.java:4
solr/core/src/java/org/apache/solr/cloud/ZkController.java:3
solr/core/src/java/org/apache/solr/core/SolrConfig.java:1
solr/core/src/java/org/apache/solr/core/SolrCores.java:1
solr/core/src/java/org/apache/solr/core/SolrXmlConfig.java:9
solr/core/src/java/org/apache/solr/core/TransientSolrCoreCache.java:1
solr/core/src/java/org/apache/solr/core/TransientSolrCoreCacheDefault.java:2
solr/core/src/java/org/apache/solr/core/TransientSolrCoreCacheFactory.java:1
solr/core/src/java/org/apache/solr/servlet/SolrDispatchFilter.java:6
solr/core/src/java/org/apache/solr/util/SolrCLI.java:2
solr/core/src/test-files/solr/solr-shardhandler.xml:1
solr/core/src/test-files/solr/solr-solrDataHome.xml:1
solr/core/src/test-files/solr/solr-trackingshardhandler.xml:1
solr/core/src/test/org/apache/solr/SolrTestCaseJ4Test.java:1
solr/core/src/test/org/apache/solr/TestSolrCoreProperties.java:1
solr/core/src/test/org/apache/solr/backcompat/TestLuceneIndexBackCompat.java:1
solr/core/src/test/org/apache/solr/client/solrj/embedded/TestJettySolrRunner.java:1
solr/core/src/test/org/apache/solr/cloud/DeleteNodeTest.java:1
solr/core/src/test/org/apache/solr/cloud/MoveReplicaTest.java:1
solr/core/src/test/org/apache/solr/cloud/ReplaceNodeNoTargetTest.java:1
solr/core/src/test/org/apache/solr/cloud/ReplaceNodeTest.java:1
solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java:1
solr/core/src/test/org/apache/solr/cloud/SolrXmlInZkTest.java:5
solr/core/src/test/org/apache/solr/cloud/TestPrepRecovery.java:1
solr/core/src/test/org/apache/solr/cloud/TestUtilizeNode.java:1
solr/core/src/test/org/apache/solr/cloud/UnloadDistributedZkTest.java:1
solr/core/src/test/org/apache/solr/cloud/ZkCLITest.java:3
solr/core/src/test/org/apache/solr/cloud/api/collections/AbstractCloudBackupRestoreTestCase.java:2
solr/core/src/test/org/apache/solr/cloud/autoscaling/AutoAddReplicasIntegrationTest.java:1
solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsNNFailoverTest.java:1
solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsWriteToMultipleCollectionsTest.java:1
solr/core/src/test/org/apache/solr/cloud/hdfs/StressHdfsTest.java:1
solr/core/src/test/org/apache/solr/core/DirectoryFactoryTest.java:1
solr/core/src/test/org/apache/solr/core/OpenCloseCoreStressTest.java:1
solr/core/src/test/org/apache/solr/core/TestCoreDiscovery.java:4
solr/core/src/test/org/apache/solr/core/TestJmxIntegration.java:1
solr/core/src/test/org/apache/solr/core/TestLazyCores.java:2
solr/core/src/test/org/apache/solr/core/TestSolrXml.java:11
solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java:1
solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerBackup.java:1
solr/core/src/test/org/apache/solr/handler/TestRestoreCore.java:1
solr/core/src/test/org/apache/solr/handler/V2StandaloneTest.java:1
solr/core/src/test/org/apache/solr/metrics/JvmMetricsTest.java:1
solr/core/src/test/org/apache/solr/metrics/reporters/solr/SolrCloudReportersTest.java:1
solr/core/src/test/org/apache/solr/schema/ChangedSchemaMergeTest.java:1
solr/core/src/test/org/apache/solr/schema/TestBinaryField.java:1
solr/core/src/test/org/apache/solr/security/BasicAuthStandaloneTest.java:2
solr/core/src/test/org/apache/solr/security/hadoop/TestZkAclsWithHadoopAuth.java:1
solr/core/src/test/org/apache/solr/util/TestSolrCLIRunExample.java:1
solr/server/README.txt:1
solr/server/solr/README.txt:6
solr/server/solr/solr.xml:1
solr/solr-ref-guide/src/authentication-and-authorization-plugins.adoc:1
solr/solr-ref-guide/src/config-sets.adoc:1
solr/solr-ref-guide/src/coreadmin-api.adoc:4
solr/solr-ref-guide/src/datadir-and-directoryfactory-in-solrconfig.adoc:1
solr/solr-ref-guide/src/distributed-search-with-index-sharding.adoc:4
solr/solr-ref-guide/src/format-of-solr-xml.adoc:14
solr/solr-ref-guide/src/getting-started-with-solrcloud.adoc:3
solr/solr-ref-guide/src/index.adoc:1
solr/solr-ref-guide/src/major-changes-in-solr-7.adoc:2
solr/solr-ref-guide/src/making-and-restoring-backups.adoc:2
solr/solr-ref-guide/src/metrics-reporting.adoc:8
solr/solr-ref-guide/src/parameter-reference.adoc:4

[jira] [Commented] (LUCENE-8210) Validate patch validation

2018-03-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403161#comment-16403161
 ] 

Steve Rowe commented on LUCENE-8210:


Reopening/setting "Patch Available" status to trigger PreCommit-Admin to 
trigger a build of PreCommit-LUCENE-Build to validate the patch.

> Validate patch validation 
> --
>
> Key: LUCENE-8210
> URL: https://issues.apache.org/jira/browse/LUCENE-8210
> Project: Lucene - Core
>  Issue Type: Test
>  Components: general/test
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: LUCENE-8210.patch
>
>
> Issue to host patches for testing automatic patch validation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12106) Validate patch validation (solr edition)

2018-03-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403158#comment-16403158
 ] 

Steve Rowe commented on SOLR-12106:
---

Reopening/setting "Patch Available" status to trigger PreCommit-Admin to 
trigger a build of PreCommit-Solr-Build to validate the patch.

> Validate patch validation (solr edition)
> 
>
> Key: SOLR-12106
> URL: https://issues.apache.org/jira/browse/SOLR-12106
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Trivial
> Attachments: SOLR-12106.patch
>
>
> Issue to host patches for testing automatic patch validation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12106) Validate patch validation (solr edition)

2018-03-16 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403158#comment-16403158
 ] 

Steve Rowe edited comment on SOLR-12106 at 3/17/18 12:31 AM:
-

Reopening/setting "Patch Available" status to trigger PreCommit-Admin to 
trigger a build of PreCommit-SOLR-Build to validate the patch.


was (Author: steve_rowe):
Reopening/setting "Patch Available" status to trigger PreCommit-Admin to 
trigger a build of PreCommit-Solr-Build to validate the patch.

> Validate patch validation (solr edition)
> 
>
> Key: SOLR-12106
> URL: https://issues.apache.org/jira/browse/SOLR-12106
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Trivial
> Attachments: SOLR-12106.patch
>
>
> Issue to host patches for testing automatic patch validation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-8210) Validate patch validation

2018-03-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8210?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened LUCENE-8210:


> Validate patch validation 
> --
>
> Key: LUCENE-8210
> URL: https://issues.apache.org/jira/browse/LUCENE-8210
> Project: Lucene - Core
>  Issue Type: Test
>  Components: general/test
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: LUCENE-8210.patch
>
>
> Issue to host patches for testing automatic patch validation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (SOLR-12106) Validate patch validation (solr edition)

2018-03-16 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reopened SOLR-12106:
---

> Validate patch validation (solr edition)
> 
>
> Key: SOLR-12106
> URL: https://issues.apache.org/jira/browse/SOLR-12106
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Trivial
> Attachments: SOLR-12106.patch
>
>
> Issue to host patches for testing automatic patch validation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 277 - Still Unstable

2018-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/277/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/15/consoleText

[repro] Revision: 1b38998379d32f9f217bf4ed640dd279f7c6237b

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.seed=5858DEE56A6058B5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=hr-HR -Dtests.timezone=Turkey 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLTRReRankingPipeline 
-Dtests.method=testDifferentTopN -Dtests.seed=1045053CB9E07063 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ja-JP -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
be8dca3c7bc064bc42662cb3fa6eb7439ffc7fdb
[repro] git fetch
[repro] git checkout 1b38998379d32f9f217bf4ed640dd279f7c6237b

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro]solr/core
[repro]   TestLargeCluster
[repro] ant compile-test

[...truncated 2579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLTRReRankingPipeline" -Dtests.showOutput=onerror  
-Dtests.seed=1045053CB9E07063 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ja-JP -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 140 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 1331 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLargeCluster" -Dtests.showOutput=onerror  
-Dtests.seed=5858DEE56A6058B5 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=hr-HR -Dtests.timezone=Turkey 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 13039 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.autoscaling.sim.TestLargeCluster
[repro]   5/5 failed: org.apache.solr.ltr.TestLTRReRankingPipeline

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro] ant compile-test

[...truncated 2579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLTRReRankingPipeline" -Dtests.showOutput=onerror  
-Dtests.seed=1045053CB9E07063 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ja-JP -Dtests.timezone=PRT 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 139 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: org.apache.solr.ltr.TestLTRReRankingPipeline

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro] ant compile-test

[...truncated 2579 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLTRReRankingPipeline" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ja-JP -Dtests.timezone=PRT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 133 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   5/5 failed: org.apache.solr.ltr.TestLTRReRankingPipeline
[repro] git checkout be8dca3c7bc064bc42662cb3fa6eb7439ffc7fdb

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re: Lucene/Solr 7.3

2018-03-16 Thread Đạt Cao Mạnh
Hi guys, Alan

I committed the fix for SOLR-12110 to branch_7_3

Thanks!

On Fri, Mar 16, 2018 at 5:43 PM Đạt Cao Mạnh 
wrote:

> Hi Alan,
>
> Sure the issue is marked as Blocker for 7.3.
>
> On Fri, Mar 16, 2018 at 3:12 PM Alan Woodward 
> wrote:
>
>> Thanks Đạt, could you mark the issue as a Blocker and let me know when
>> it’s been resolved?
>>
>> On 16 Mar 2018, at 02:05, Đạt Cao Mạnh  wrote:
>>
>> Hi guys, Alan,
>>
>> I found a blocker issue, SOLR-12110, while investigating a test failure. I've
>> already uploaded a patch and am beasting the tests; if the results are good I
>> will commit soon.
>>
>> Thanks!
>>
>> On Tue, Mar 13, 2018 at 7:49 PM Alan Woodward 
>> wrote:
>>
>>> Just realised that I don’t have an ASF Jenkins account - Uwe or Steve,
>>> can you give me a hand setting up the 7.3 Jenkins jobs?
>>>
>>> Thanks, Alan
>>>
>>>
>>> On 12 Mar 2018, at 09:32, Alan Woodward  wrote:
>>>
>>> I’ve created the 7.3 release branch.  I’ll leave 24 hours for bug-fixes
>>> and doc patches and then create a release candidate.
>>>
>>> We’re now in feature-freeze for 7.3, so please bear in mind the
>>> following:
>>>
>>>- No new features may be committed to the branch.
>>>- Documentation patches, build patches and serious bug fixes may be
>>>committed to the branch. However, you should submit *all* patches
>>>you want to commit to Jira first to give others the chance to review and
>>>possibly vote against the patch. Keep in mind that it is our main 
>>> intention
>>>to keep the branch as stable as possible.
>>>- All patches that are intended for the branch should first be
>>>committed to the unstable branch, merged into the stable branch, and then
>>>into the current release branch.
>>>- Normal unstable and stable branch development may continue as
>>>usual. However, if you plan to commit a big change to the unstable branch
>>>while the branch feature freeze is in effect, think twice: can't the
>>>addition wait a couple more days? Merges of bug fixes into the branch may
>>>become more difficult.
>>>- *Only* Jira issues with Fix version “7.3" and priority "Blocker"
>>>will delay a release candidate build.
>>>
>>>
>>>
>>> On 9 Mar 2018, at 16:43, Alan Woodward  wrote:
>>>
>>> FYI I’m still recovering from my travels, so I’m going to create the
>>> release branch on Monday instead.
>>>
>>> On 27 Feb 2018, at 18:51, Cassandra Targett 
>>> wrote:
>>>
>>> I intend to create the Ref Guide RC as soon as the Lucene/Solr artifacts
>>> RC is ready, so this is a great time to remind folks that if you've got
>>> Ref Guide changes to be done, you've got a couple weeks. If you're stuck or
>>> not sure what to do, let me know & I'm happy to help you out.
>>>
>>> Eventually we'd like to release both the Ref Guide and Lucene/Solr with
>>> the same release process, so this will be a big first test to see how ready
>>> for that we are.
>>>
>>> On Tue, Feb 27, 2018 at 11:42 AM, Michael McCandless <
>>> luc...@mikemccandless.com> wrote:
>>>
 +1

 Mike McCandless

 http://blog.mikemccandless.com

 On Fri, Feb 23, 2018 at 4:50 AM, Alan Woodward <
 alan.woodw...@romseysoftware.co.uk> wrote:

> Hi all,
>
> It’s been a couple of months since the 7.2 release, and we’ve
> accumulated some nice new features since then.  I’d like to volunteer to 
> be
> RM for a 7.3 release.
>
> I’m travelling for the next couple of weeks, so I would plan to create
> the release branch two weeks today, on the 9th March (unless anybody else
> wants to do it sooner, of course :)
>
> - Alan
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>

>>>
>>>
>>>
>>>
>>


[JENKINS] Lucene-Solr-Tests-7.x - Build # 510 - Unstable

2018-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/510/

1 tests failed.
FAILED:  
org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([26CD22D4A2A7ACB8:A14C9F79A687D6BC]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 398 lines...]
   [junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal
   [junit4]   1> TEST FAILED; IW infoStream output:
   [junit4]   1> IFD 0 [2018-03-16T20:27:24.158Z; 
TEST-TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler-seed#[26CD22D4A2A7ACB8]]:
 init: current segments file is "segments"; 
deletionPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy@668b20b6
   [junit4]   1> IFD 0 [2018-03-16T20:27:24.328Z; 
TEST-TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler-seed#[26CD22D4A2A7ACB8]]:
 now checkpoint "" [0 segments ; isCommit = false]
   [junit4]   1> IFD 0 [2018-03-16T20:27:24.328Z; 

[jira] [Commented] (SOLR-12110) Replica which failed to register in Zk can become leader

2018-03-16 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403138#comment-16403138
 ] 

Cao Manh Dat commented on SOLR-12110:
-

Thanks [~shalinmangar]

> Replica which failed to register in Zk can become leader
> 
>
> Key: SOLR-12110
> URL: https://issues.apache.org/jira/browse/SOLR-12110
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12110.patch, SOLR-12110.patch
>
>
> When an exception is thrown in ZkController.register(), a replica can still 
> join the leader election and become leader afterwards. This causes many problems; 
> one of them (the patch includes a test that reproduces this failure) is that 
> a replica in the DOWN state can become leader, and the shard will be stuck in 
> this state forever until the replica is removed or the node containing the 
> replica is restarted.
> This won't be a problem in Solr 7.2.1, because a replica whose last published 
> state is DOWN can't become leader; that only changed once SOLR-7034 was 
> resolved (by SOLR-12011).
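For illustration only, a self-contained sketch of this failure mode (hypothetical names, not Solr's actual ZkController code): if the registration failure is swallowed, the replica still falls through into the election.

{code:java}
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical illustration of the bug described above (not Solr's real code):
// a replica whose ZK registration fails still joins the leader election.
public class RegistrationSketch {

  static final AtomicReference<String> LEADER = new AtomicReference<>();

  static void register(String replica, boolean zkFails) {
    if (zkFails) {
      throw new IllegalStateException("ZK registration failed for " + replica);
    }
  }

  static void joinElection(String replica) {
    // First candidate wins, regardless of whether it registered successfully.
    LEADER.compareAndSet(null, replica);
  }

  static void buggyRegisterAndElect(String replica, boolean zkFails) {
    try {
      register(replica, zkFails);
    } catch (Exception e) {
      System.err.println(e.getMessage()); // bug: keep going instead of bailing out
    }
    joinElection(replica); // reached even though registration failed
  }

  public static void main(String[] args) {
    buggyRegisterAndElect("core_node1", true);
    System.out.println("leader = " + LEADER.get()); // prints core_node1 despite the failure
  }
}
{code}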



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12110) Replica which failed to register in Zk can become leader

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16403137#comment-16403137
 ] 

ASF subversion and git services commented on SOLR-12110:


Commit 8a3742d2ee342ee60a6ed822e36fbdf66e0b5b97 in lucene-solr's branch 
refs/heads/branch_7_3 from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8a3742d ]

SOLR-12110: Replica which failed to register in Zk can become leader


> Replica which failed to register in Zk can become leader
> 
>
> Key: SOLR-12110
> URL: https://issues.apache.org/jira/browse/SOLR-12110
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12110.patch, SOLR-12110.patch
>
>
> When an exception is thrown in ZkController.register(), a replica can still 
> joinElection and become leader after that. This will cause many problems; one 
> of the problems that can happen (the patch includes a test which leads to 
> this failure) is:
> A replica in the DOWN state can become a leader and the shard will be stuck in 
> this state forever until the replica is removed or the node containing the 
> replica is restarted.
> This won't be a problem in Solr 7.2.1 because a replica whose last published 
> state is DOWN can't become a leader; that only changed once SOLR-7034 was 
> resolved (by SOLR-12011)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12110) Replica which failed to register in Zk can become leader

2018-03-16 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat resolved SOLR-12110.
-
   Resolution: Fixed
Fix Version/s: 7.3

> Replica which failed to register in Zk can become leader
> 
>
> Key: SOLR-12110
> URL: https://issues.apache.org/jira/browse/SOLR-12110
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Fix For: 7.3
>
> Attachments: SOLR-12110.patch, SOLR-12110.patch
>
>
> When an exception is thrown in ZkController.register(), a replica can still 
> joinElection and become leader after that. This will cause many problems; one 
> of the problems that can happen (the patch includes a test which leads to 
> this failure) is:
> A replica in the DOWN state can become a leader and the shard will be stuck in 
> this state forever until the replica is removed or the node containing the 
> replica is restarted.
> This won't be a problem in Solr 7.2.1 because a replica whose last published 
> state is DOWN can't become a leader; that only changed once SOLR-7034 was 
> resolved (by SOLR-12011)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 498 - Unstable!

2018-03-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/498/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

Error Message:
Did not expect the listener to fire on first run!

Stack Trace:
java.lang.AssertionError: Did not expect the listener to fire on first run!
at 
__randomizedtesting.SeedInfo.seed([5C8C08CF2684B84A:3F473E4DBF4BCB67]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.lambda$new$0(ScheduledTriggerTest.java:48)
at 
org.apache.solr.cloud.autoscaling.ScheduledTrigger.run(ScheduledTrigger.java:191)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.scheduledTriggerTest(ScheduledTriggerTest.java:102)
at 
org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger(ScheduledTriggerTest.java:75)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-12110) Replica which failed to register in Zk can become leader

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403135#comment-16403135
 ] 

ASF subversion and git services commented on SOLR-12110:


Commit 911fda2efd4d71a604c1815e7e0545bc66986eee in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=911fda2 ]

SOLR-12110: Replica which failed to register in Zk can become leader


> Replica which failed to register in Zk can become leader
> 
>
> Key: SOLR-12110
> URL: https://issues.apache.org/jira/browse/SOLR-12110
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Attachments: SOLR-12110.patch, SOLR-12110.patch
>
>
> When an exception is thrown in ZkController.register(), a replica can still 
> joinElection and become leader after that. This will cause many problems; one 
> of the problems that can happen (the patch includes a test which leads to 
> this failure) is:
> A replica in the DOWN state can become a leader and the shard will be stuck in 
> this state forever until the replica is removed or the node containing the 
> replica is restarted.
> This won't be a problem in Solr 7.2.1 because a replica whose last published 
> state is DOWN can't become a leader; that only changed once SOLR-7034 was 
> resolved (by SOLR-12011)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12110) Replica which failed to register in Zk can become leader

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403134#comment-16403134
 ] 

ASF subversion and git services commented on SOLR-12110:


Commit be8dca3c7bc064bc42662cb3fa6eb7439ffc7fdb in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=be8dca3 ]

SOLR-12110: Replica which failed to register in Zk can become leader


> Replica which failed to register in Zk can become leader
> 
>
> Key: SOLR-12110
> URL: https://issues.apache.org/jira/browse/SOLR-12110
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Blocker
> Attachments: SOLR-12110.patch, SOLR-12110.patch
>
>
> When an exception is thrown in ZkController.register(), a replica can still 
> joinElection and become leader after that. This will cause many problems; one 
> of the problems that can happen (the patch includes a test which leads to 
> this failure) is:
> A replica in the DOWN state can become a leader and the shard will be stuck in 
> this state forever until the replica is removed or the node containing the 
> replica is restarted.
> This won't be a problem in Solr 7.2.1 because a replica whose last published 
> state is DOWN can't become a leader; that only changed once SOLR-7034 was 
> resolved (by SOLR-12011)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11731) LatLonPointSpatialField could be improved to return the lat/lon from docValues

2018-03-16 Thread Karthik Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403114#comment-16403114
 ] 

Karthik Ramachandran commented on SOLR-11731:
-

Looks good to me.

[~dsmiley] Thanks for making the changes to the test.

> LatLonPointSpatialField could be improved to return the lat/lon from docValues
> --
>
> Key: SOLR-11731
> URL: https://issues.apache.org/jira/browse/SOLR-11731
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11731.patch, SOLR-11731.patch, SOLR-11731.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> You can only return the lat & lon from a LatLonPointSpatialField if you set 
> stored=true.  But we could allow this (albeit at a small loss in precision) 
> if stored=false and docValues=true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11861) ConfigSets CREATE baseConfigSet param should default to _default

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403115#comment-16403115
 ] 

Amrit Sarkar commented on SOLR-11861:
-

[~dsmiley], I have attached a small patch for the improvement here, with relevant tests 
validating it. Feedback will be deeply appreciated.
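
For context, a rough sketch of the API call this would simplify (the host and configset names below are placeholders, and the exact behaviour is whatever the patch implements):

{noformat}
# today: baseConfigSet has to be passed explicitly
curl "http://localhost:8983/solr/admin/configs?action=CREATE&name=myConfigSet&baseConfigSet=_default"

# with this improvement: omitting baseConfigSet would fall back to _default
curl "http://localhost:8983/solr/admin/configs?action=CREATE&name=myConfigSet"
{noformat}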

> ConfigSets CREATE baseConfigSet param should default to _default
> 
>
> Key: SOLR-11861
> URL: https://issues.apache.org/jira/browse/SOLR-11861
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11861.patch
>
>
> It would be nice if I didn't have to specify the baseConfigSet param now that 
> we have a default configSet "_default".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11861) ConfigSets CREATE baseConfigSet param should default to _default

2018-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11861:

Attachment: SOLR-11861.patch

> ConfigSets CREATE baseConfigSet param should default to _default
> 
>
> Key: SOLR-11861
> URL: https://issues.apache.org/jira/browse/SOLR-11861
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11861.patch
>
>
> It would be nice if I didn't have to specify the baseConfigSet param now that 
> we have a default configSet "_default".



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12059) Unable to rename solr.xml

2018-03-16 Thread Edwin Yeo Zheng Lin (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403093#comment-16403093
 ] 

Edwin Yeo Zheng Lin commented on SOLR-12059:


Just to check, is the solr.xml only hard coded in the SolrXmlConfig.class 
source code?

> Unable to rename solr.xml
> -
>
> Key: SOLR-12059
> URL: https://issues.apache.org/jira/browse/SOLR-12059
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
> Environment: Renaming of solr,xml in the $SOLR_HOME directory
>Reporter: Edwin Yeo Zheng Lin
>Priority: Major
>
> I am able to rename files like solrconfig.xml and solr.log to custom 
> names like myconfig.xml and my.log quite seamlessly. 
> However, I am not able to do the same for solr.xml. I understand that the 
> name solr.xml is hard-coded in SolrXmlConfig.java, meaning it requires a 
> recompile of the jar file in order to rename it.
> Since we can rename files like solrconfig.xml via the properties files, 
> should we be able to do the same for solr.xml?
>  
>  
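
For reference, the renaming that already works today is driven by per-core properties; a sketch with illustrative values:

{noformat}
# core.properties
name=mycore
config=myconfig.xml
schema=myschema.xml
{noformat}

solr.xml, by contrast, is located by its fixed name in SOLR_HOME (or in ZooKeeper) before any cores are discovered, which is presumably why no equivalent property exists for it today.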



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-03-16 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403073#comment-16403073
 ] 

Hoss Man commented on SOLR-12118:
-

The attached patch adds 7 new attributes to our ref-guide build, for the 7 different 
libraries I could find whose version is explicitly mentioned in some way in the 
docs...
 * commons-codec
 * dropwizard
 * log4j
 * opennlp
 * tika
 * velocity
 * zookeeper

In most cases, these mentions were in URLs linking to documentation – but that 
may be a Streetlight Effect, since searching the source files for URLs that 
looked like they had version numbers was easy – there are probably other third-party 
version mentions in "plain text" that I have overlooked.

Fortunately adding more variables like this is easy.
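
As a concrete illustration of the idea (the attribute name below is hypothetical; the patch defines the real names), a hard-coded version in a ref-guide page gets replaced by a value sourced from ivy-versions.properties:

{noformat}
# lucene/ivy-versions.properties already records, e.g.:
/org.apache.zookeeper/zookeeper = 3.4.11

# so a ref-guide page can link via an attribute instead of a literal version:
http://zookeeper.apache.org/doc/r{ivy-zookeeper-version}/zookeeperAdmin.html
{noformat}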

> use ivy-versions.properties values as attributes in ref-guide files to 
> replace hard coded version numbers
> -
>
> Key: SOLR-12118
> URL: https://issues.apache.org/jira/browse/SOLR-12118
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12118.patch
>
>
> There's currently a bunch of places in the ref guide where we mention third 
> party libraries and refer to hard coded version numbers - many of which are 
> not consistent with the versions of those libraries actually in use because 
> it's easy to overlook them.
> We should improve the ref-guide build files to pull in the 
> {{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2018-03-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403068#comment-16403068
 ] 

Uwe Schindler commented on SOLR-11331:
--

[~varunthacker]: Should I commit this and push?

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.6.2
>Reporter: Karthik Ramachandran
>Assignee: Varun Thacker
>Priority: Minor
>  Labels: eclipse
> Attachments: SOLR-11331.diff, SOLR-11331.patch, SOLR-11331.patch, 
> SOLR-11331.patch, SOLR-11331.patch, SOLR-11331.patch, UI.png
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-03-16 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-12118:

Attachment: SOLR-12118.patch

> use ivy-versions.properties values as attributes in ref-guide files to 
> replace hard coded version numbers
> -
>
> Key: SOLR-12118
> URL: https://issues.apache.org/jira/browse/SOLR-12118
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-12118.patch
>
>
> There's currently a bunch of places in the ref guide where we mention third 
> party libraries and refer to hard coded version numbers - many of which are 
> not consistent with the versions of those libraries actually in use because 
> it's easy to overlook them.
> We should improve the ref-guide build files to pull in the 
> {{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12118) use ivy-versions.properties values as attributes in ref-guide files to replace hard coded version numbers

2018-03-16 Thread Hoss Man (JIRA)
Hoss Man created SOLR-12118:
---

 Summary: use ivy-versions.properties values as attributes in 
ref-guide files to replace hard coded version numbers
 Key: SOLR-12118
 URL: https://issues.apache.org/jira/browse/SOLR-12118
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man
Assignee: Hoss Man


There's currently a bunch of places in the ref guide where we mention third 
party libraries and refer to hard coded version numbers - many of which are not 
consistent with the versions of those libraries actually in use because it's 
easy to overlook them.

We should improve the ref-guide build files to pull in the 
{{ivy-version.properties}} variables to use as attributes in the source files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8203) Windows failures when removing test directories

2018-03-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16403064#comment-16403064
 ] 

Uwe Schindler commented on LUCENE-8203:
---

Yeah, looks like that. Sorry for the test noise! Now that Solr tests are not 
failing all the time, it's much easier to see problems like that. I was no 
longer looking at the Windows failures, because I was seeing too many failures 
and had lost track of individual test failures.

> Windows failures when removing test directories
> ---
>
> Key: LUCENE-8203
> URL: https://issues.apache.org/jira/browse/LUCENE-8203
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: image-2018-03-13-19-15-51-149.png
>
>
> I was looking at Lucene failures of Policeman Jenkins' Windows job 
> (https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows) and they all 
> fail  when cleaning up temporary files/dirs used for testing, eg.
> {noformat}
> [junit4] ERROR   0.00s J1 | TestBoolean2 (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001\tempDir-001:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001\tempDir-001
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.search.TestBoolean2_B7B1F66EB9785AE1-001
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([B7B1F66EB9785AE1]:0)
>[junit4]>  at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> Does anyone have any ideas what the problem is? At first sight it looks:
>  - not due to unclosed index inputs or MockDirectoryWrapper would barf too
>  - not related to the unmap hack since we have failures on tests that do not 
> use MmapDirectory at all like TestNIOFSDirectory
>  - not due to the fact that we do not release resources in a try/finally or 
> try-with-resources block or junit would report the exception that prevented 
> the dir/input from being closed as well
> It's also surprising how it sometimes fails with a DirectoryNotEmptyException 
> without reporting issues about deleting inner files of the directory.
> I don't have much background on this issue so I could easily have missed 
> something.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 978 - Still Failing

2018-03-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/978/

No tests ran.

Build Log:
[...truncated 30082 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 230 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (20.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.3 MB in 0.04 sec (674.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.4 MB in 0.11 sec (682.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.8 MB in 0.12 sec (675.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6253 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (41.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 53.7 MB in 0.52 sec (104.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 154.6 MB in 1.14 sec (135.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 155.6 MB in 1.18 sec (131.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 

[jira] [Comment Edited] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402548#comment-16402548
 ] 

Jerry Bao edited comment on SOLR-12087 at 3/16/18 10:28 PM:


Adding some more potentially relevant information:

We're constantly updating Solr collections via live streaming updates. I 
noticed that moving shards that don't have live indexing is much easier than 
those that do. Also heavy indexing seems to be a factor in whether or not 
zombie shards exist.

EDIT: It seems that collections with indexing consistently have zombie shards 
vs. those that don't.


was (Author: jerry.bao):
Adding some more potentially relevant information:

We're constantly updating Solr collections via live streaming updates. I 
noticed that moving shards that don't have live indexing is much easier than 
those that do. Also heavy indexing seems to be a factor in whether or not 
zombie shards exist.

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11891) DocsStreamer populates SolrDocument w/unnecessary fields

2018-03-16 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11891?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-11891:
---

Assignee: Hoss Man

> DocsStreamer populates SolrDocument w/unnecessary fields
> 
>
> Key: SOLR-11891
> URL: https://issues.apache.org/jira/browse/SOLR-11891
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 5.4, 6.4.2, 6.6.2
>Reporter: wei wang
>Assignee: Hoss Man
>Priority: Major
> Attachments: DocsStreamer.java.diff, SOLR-11891.patch, 
> SOLR-11891.patch.BAD
>
>
> We observe that Solr query time increases significantly with the number of 
> rows requested, even when all we retrieve for each document is just fl=id,score.  
> Debugging a bit, we see that most of the increased time was spent in 
> BinaryResponseWriter, converting the Lucene document into a SolrDocument.  Inside 
> convertLuceneDocToSolrDoc():   
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L182]
>  
> I am a bit puzzled why we need to iterate through all the fields in the 
> document. Why can’t we just iterate through the requested field list?    
> [https://github.com/apache/lucene-solr/blob/df874432b9a17b547acb24a01d3491839e6a6b69/solr/core/src/java/org/apache/solr/response/DocsStreamer.java#L156]
>  
> e.g. when passing in the field list as 
> sdoc = convertLuceneDocToSolrDoc(doc, rctx.getSearcher().getSchema(), fnames)
> and just iterate through fnames,  there is a significant performance boost in 
> our case.  
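
A minimal sketch of the approach wei wang describes (this is not the attached patch; the class name is made up, and it ignores details like lazy field loading that the real DocsStreamer handles):

{code:java}
import java.util.Set;

import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexableField;
import org.apache.solr.common.SolrDocument;
import org.apache.solr.schema.IndexSchema;
import org.apache.solr.schema.SchemaField;

public class RequestedFieldsOnlySketch {

  // Convert only the requested field names instead of iterating every
  // stored field on the Lucene document.
  public static SolrDocument convertLuceneDocToSolrDoc(Document doc, IndexSchema schema, Set<String> fnames) {
    SolrDocument out = new SolrDocument();
    for (String fname : fnames) {
      // getFields(String) returns every value of a multi-valued field
      for (IndexableField f : doc.getFields(fname)) {
        SchemaField sf = schema.getFieldOrNull(f.name());
        Object value = (sf != null) ? sf.getType().toObject(f) : f.stringValue();
        out.addField(f.name(), value);
      }
    }
    return out;
  }
}
{code}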



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402548#comment-16402548
 ] 

Jerry Bao edited comment on SOLR-12087 at 3/16/18 10:28 PM:


Adding some more potentially relevant information:

We're constantly updating Solr collections via live streaming updates. I 
noticed that moving shards that don't have live indexing is much easier than 
those that do. Also heavy indexing seems to be a factor in whether or not 
zombie shards exist.

EDIT: It seems that collections with indexing/querying consistently have zombie 
shards vs. those that don't.


was (Author: jerry.bao):
Adding some more potentially relevant information:

We're constantly updating Solr collections via live streaming updates. I 
noticed that moving shards that don't have live indexing is much easier than 
those that do. Also heavy indexing seems to be a factor in whether or not 
zombie shards exist.

EDIT: It seems that collections with indexing consistently have zombie shards 
vs. those that don't.

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402548#comment-16402548
 ] 

Jerry Bao edited comment on SOLR-12087 at 3/16/18 10:14 PM:


Adding some more potentially relevant information:

We're constantly updating Solr collections via live streaming updates. I 
noticed that moving shards that don't have live indexing is much easier than 
those that do. Also heavy indexing seems to be a factor in whether or not 
zombie shards exist.


was (Author: jerry.bao):
I've updated the description with more information.

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until it's removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12112) NPE in QueryComponent

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402569#comment-16402569
 ] 

Amrit Sarkar edited comment on SOLR-12112 at 3/16/18 9:32 PM:
--

[~markus17], is this fixed in 7.3? thanks in advance.


was (Author: sarkaramr...@gmail.com):
[~markus17], is this fixed in 7.3?

> NPE in QueryComponent
> -
>
> Key: SOLR-12112
> URL: https://issues.apache.org/jira/browse/SOLR-12112
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: 7.3, master (8.0)
>
>
> http://localhost:8983/solr/ss/select?q=*=/select2
> causes:
> {code}
> 2018-03-16 14:46:59.153 ERROR (qtp1929600551-19) [c:search s:shard2 
> r:core_node4 x:search_shard2_replica_n2] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.QueryComponent.unmarshalSortValues(QueryComponent.java:1037)
> at 
> org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:885)
> at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:585)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:564)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:423)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> {code}
> using config
> {code}
> <requestHandler name="/select" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="echoParams">explicit</str>
>     <int name="rows">10</int>
>   </lst>
> </requestHandler>
>
> <requestHandler name="/select2" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="sort">score desc,id asc</str>
>     <str name="echoParams">none</str>
>   </lst>
> </requestHandler>
> {code}
> The sort param in /select2 is the culprit here. Remove it and all goes well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12112) NPE in QueryComponent

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402569#comment-16402569
 ] 

Amrit Sarkar commented on SOLR-12112:
-

[~markus17], is this fixed in 7.3?

> NPE in QueryComponent
> -
>
> Key: SOLR-12112
> URL: https://issues.apache.org/jira/browse/SOLR-12112
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: 7.3, master (8.0)
>
>
> http://localhost:8983/solr/ss/select?q=*=/select2
> causes:
> {code}
> 2018-03-16 14:46:59.153 ERROR (qtp1929600551-19) [c:search s:shard2 
> r:core_node4 x:search_shard2_replica_n2] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.QueryComponent.unmarshalSortValues(QueryComponent.java:1037)
> at 
> org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:885)
> at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:585)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:564)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:423)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> {code}
> using config
> {code}
> <requestHandler name="/select" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="echoParams">explicit</str>
>     <int name="rows">10</int>
>   </lst>
> </requestHandler>
>
> <requestHandler name="/select2" class="solr.SearchHandler">
>   <lst name="defaults">
>     <str name="sort">score desc,id asc</str>
>     <str name="echoParams">none</str>
>   </lst>
> </requestHandler>
> {code}
> The sort param in /select2 is the culprit here. Remove it and all goes well.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12117) Autoscaling suggestions are too few or non existent for clear violations

2018-03-16 Thread Jerry Bao (JIRA)
Jerry Bao created SOLR-12117:


 Summary: Autoscaling suggestions are too few or non existent for 
clear violations
 Key: SOLR-12117
 URL: https://issues.apache.org/jira/browse/SOLR-12117
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Jerry Bao
 Attachments: autoscaling.json, diagnostics.json, solr_instances, 
suggestions.json

Attaching suggestions, diagnostics, autoscaling settings, and the 
solr_instances AZ's. One of the operations suggested is impossible:
{code:java}
{"type": "violation","violation": {"node": 
"solr-0a7207d791bd08d4e:8983_solr","tagKey": "null","violation": {"node": 
"4","delta": 1},"clause": {"cores": "<4","node": "#ANY"}},"operation": 
{"method": "POST","path": "/c/r_posts","command": {"move-replica": 
{"targetNode": "solr-0f0e86f34298f7e79:8983_solr","inPlaceMove": 
"true","replica": "2151000"}}}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12117) Autoscaling suggestions are too few or non existent for clear violations

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12117:
-
Description: Attaching suggestions, diagnostics, autoscaling settings, and 
the solr_instances AZ's. Some of the suggestions are one too many for a single 
violation, and other suggestions do not appear even though there are clear 
violations in the policy that are easily fixable.  (was: Attaching suggestions, 
diagnostics, autoscaling settings, and the solr_instances AZ's. One of the 
operations suggested is impossible:
{code:java}
{"type": "violation","violation": {"node": 
"solr-0a7207d791bd08d4e:8983_solr","tagKey": "null","violation": {"node": 
"4","delta": 1},"clause": {"cores": "<4","node": "#ANY"}},"operation": 
{"method": "POST","path": "/c/r_posts","command": {"move-replica": 
{"targetNode": "solr-0f0e86f34298f7e79:8983_solr","inPlaceMove": 
"true","replica": "2151000"}}}{code})

> Autoscaling suggestions are too few or non existent for clear violations
> 
>
> Key: SOLR-12117
> URL: https://issues.apache.org/jira/browse/SOLR-12117
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: autoscaling.json, diagnostics.json, solr_instances, 
> suggestions.json
>
>
> Attaching suggestions, diagnostics, autoscaling settings, and the 
> solr_instances AZ's. Some of the suggestions are one too many for a single 
> violation, and other suggestions do not appear even though there are clear 
> violations in the policy that are easily fixable.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11836) Use -1 in bucketSizeLimit to get all facets, analogous to the JSON facet API

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402564#comment-16402564
 ] 

Amrit Sarkar commented on SOLR-11836:
-

[~joel.bernstein] will the fix / workaround be part of Solr 7.3? thank you in 
advance.

> Use -1 in bucketSizeLimit to get all facets, analogous to the JSON facet API
> 
>
> Key: SOLR-11836
> URL: https://issues.apache.org/jira/browse/SOLR-11836
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.2
>Reporter: Alfonso Muñoz-Pomer Fuentes
>Priority: Major
>  Labels: facet, streaming
> Attachments: SOLR-11836.patch
>
>
> Currently, to retrieve all buckets using the streaming expressions facet 
> function, the {{bucketSizeLimit}} parameter must have a high enough value so 
> that all results will be included. Compare this with the JSON facet API, 
> where you can use {{"limit": -1}} to achieve this. It would help if such a 
> possibility existed.
> [Issue 11236|https://issues.apache.org/jira/browse/SOLR-11236] is related.
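
For illustration only (the collection and field names here are made up), the two forms being compared look roughly like this:

{noformat}
# streaming expression: today bucketSizeLimit must be a concrete number
facet(myCollection, q="*:*", buckets="category_s", bucketSorts="count(*) desc", bucketSizeLimit=10000, count(*))

# JSON facet API: -1 already means "return all buckets"
{ "categories" : { "type" : "terms", "field" : "category_s", "limit" : -1 } }
{noformat}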



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402566#comment-16402566
 ] 

Amrit Sarkar commented on SOLR-9272:


[~janhoy], eagerly waiting for your feedback and review, thanks.

> Auto resolve zkHost for bin/solr zk for running Solr
> 
>
> Key: SOLR-9272
> URL: https://issues.apache.org/jira/browse/SOLR-9272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already 
> running. We can optionally accept the {{-p}} parameter instead, and with that 
> use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's 
> easier to remember solr port than zk string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12116) Autoscaling suggests to move a replica that does not exist (all numbers)

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12116:
-
Attachment: solr_instances
autoscaling.json
diagnostics.json
suggestions.json

> Autoscaling suggests to move a replica that does not exist (all numbers)
> 
>
> Key: SOLR-12116
> URL: https://issues.apache.org/jira/browse/SOLR-12116
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: autoscaling.json, diagnostics.json, solr_instances, 
> suggestions.json
>
>
> Attaching suggestions, diagnostics, autoscaling settings, and the 
> solr_instances AZ's. One of the operations suggested is impossible:
> {code:java}
> {"type": "violation","violation": {"node": 
> "solr-0a7207d791bd08d4e:8983_solr","tagKey": "null","violation": {"node": 
> "4","delta": 1},"clause": {"cores": "<4","node": "#ANY"}},"operation": 
> {"method": "POST","path": "/c/r_posts","command": {"move-replica": 
> {"targetNode": "solr-0f0e86f34298f7e79:8983_solr","inPlaceMove": 
> "true","replica": "2151000"}}}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12116) Autoscaling suggests to move a replica that does not exist (all numbers)

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12116:
-
Priority: Critical  (was: Major)

> Autoscaling suggests to move a replica that does not exist (all numbers)
> 
>
> Key: SOLR-12116
> URL: https://issues.apache.org/jira/browse/SOLR-12116
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Jerry Bao
>Priority: Critical
>
> Attaching suggestions, diagnostics, autoscaling settings, and the 
> solr_instances AZ's. One of the operations suggested is impossible:
> {code:java}
> {"type": "violation","violation": {"node": 
> "solr-0a7207d791bd08d4e:8983_solr","tagKey": "null","violation": {"node": 
> "4","delta": 1},"clause": {"cores": "<4","node": "#ANY"}},"operation": 
> {"method": "POST","path": "/c/r_posts","command": {"move-replica": 
> {"targetNode": "solr-0f0e86f34298f7e79:8983_solr","inPlaceMove": 
> "true","replica": "2151000"}}}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12116) Autoscaling suggests to move a replica that does not exist (all numbers)

2018-03-16 Thread Jerry Bao (JIRA)
Jerry Bao created SOLR-12116:


 Summary: Autoscaling suggests to move a replica that does not 
exist (all numbers)
 Key: SOLR-12116
 URL: https://issues.apache.org/jira/browse/SOLR-12116
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Jerry Bao


Attaching suggestions, diagnostics, autoscaling settings, and the 
solr_instances AZ's. One of the operations suggested is impossible:
{code:java}
{"type": "violation","violation": {"node": 
"solr-0a7207d791bd08d4e:8983_solr","tagKey": "null","violation": {"node": 
"4","delta": 1},"clause": {"cores": "<4","node": "#ANY"}},"operation": 
{"method": "POST","path": "/c/r_posts","command": {"move-replica": 
{"targetNode": "solr-0f0e86f34298f7e79:8983_solr","inPlaceMove": 
"true","replica": "2151000"}}}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402558#comment-16402558
 ] 

Amrit Sarkar commented on SOLR-11601:
-

Tests attached. The actual error: 
{{"sort param could not be parsed as a query, and is not a field that exists in 
the index: geodist(b4_location__geo_si,47.36667,8.55)"}} 

is coming from {{SortSpecParsing}}, and I would not like to make any changes 
there as other components depend on it. The tests validate that the same error is 
received, and the Solr logs will point out what needs to be done.

[~dsmiley], I would really appreciate it if you could review.

> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Priority: Minor
> Attachments: SOLR-11601.patch, SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sortquery (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking sort using syntax 
> {color:#14892c}sfield=b4_location__geo_si=47.36667,8.55=geodist() asc
> works as expected though...{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12087:
-
Description: 
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en masse; the 
result is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

This also occurs when trying to move replicas, since that move is an add and 
delete.

Some more information regarding this issue; when the MOVEREPLICA command is 
issued, the new replica is created successfully but the replica to be deleted 
fails to be removed from state.json (the core is deleted though) and we see two 
logs spammed.
 # The node containing the leader replica continually (read every second) 
attempts to initiate recovery on the replica and fails to do so because the 
core does not exist. As a result it continually publishes a down state for the 
replica to zookeeper.
 # The deleted replica node spams that it cannot locate the core because it's 
been deleted.

During this period of time, we see an increase in ZK network connectivity 
overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
shard until it's removed from the state).

My guess is there are two issues at hand here:
 # The leader continually attempts to recover a downed replica that is 
unrecoverable because the core does not exist.
 # The replica to be deleted is having trouble being deleted from state.json in 
ZK.

This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.

  was:
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en mass; the 
resulting cause is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

This also occurs when trying to move replicas, since that move is an add and 
delete.

Some more information regarding this issue; when the MOVEREPLICA command is 
issued, the new replica is created successfully but the replica to be deleted 
fails to be removed from state.json (the core is deleted though) and we see two 
logs spammed.
 # The node containing the leader replica continually (read every second) 
attempts to initiate recovery on the replica and fails to do so because the 
core does not exist. As a result it continually publishes a down state for the 
replica to zookeeper.
 # The replica node spams that it cannot locate the core because it's been 
deleted.

During this period of time, we see an increase in ZK network connectivity 
overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
shard until its removed from the state)

My guess is there's two issues at hand here:
 # The leader continually attempts to recover a downed replica that is 
unrecoverable because the core does not exist.
 # The replica to be deleted is having trouble being deleted from state.json in 
ZK.

This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.


> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en mass; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been 

[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12087:
-
Description: 
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en masse; the 
result is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

This also occurs when trying to move replicas, since that move is an add and 
delete.

Some more information regarding this issue; when the MOVEREPLICA command is 
issued, the new replica is created successfully but the replica to be deleted 
fails to be removed from state.json (the core is deleted though) and we see two 
logs spammed.
 # The node containing the leader replica continually (read every second) 
attempts to initiate recovery on the replica and fails to do so because the 
core does not exist. As a result it continually publishes a down state for the 
replica to zookeeper.
 # The replica node spams that it cannot locate the core because it's been 
deleted.

During this period of time, we see an increase in ZK network connectivity 
overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
shard until it's removed from the state).

My guess is there are two issues at hand here:
 # The leader continually attempts to recover a downed replica that is 
unrecoverable because the core does not exist.
 # The replica to be deleted is having trouble being deleted from state.json in 
ZK.

This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.

  was:
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en mass; the 
resulting cause is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

This also occurs when trying to move replicas, since that move is an add and 
delete.

Some more information regarding this issue; when the MOVEREPLICA command is 
issued, the new replica is created successfully but the replica to be deleted 
fails to be removed from state.json (the core is deleted though) and we see two 
logs spammed.
 # The node containing the leader replica continually attempts to initiate 
recovery on the replica and fails to do so because the core does not exist. As 
a result it continually publishes a down state for the replica to zookeeper.
 # The replica node spams that it cannot locate the core because it's been 
deleted.

During this period of time, we see an increase in ZK network connectivity 
overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
shard until its removed from the state)

My guess is there's two issues at hand here:
 # The leader continually attempts to recover a downed replica that is 
unrecoverable because the core does not exist.
 # The replica to be deleted is having trouble being deleted from state.json in 
ZK.

This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.


> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en mass; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The replica node spams that it cannot locate the core because it's been 
> deleted.
> During this period of time, we 

[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402548#comment-16402548
 ] 

Jerry Bao commented on SOLR-12087:
--

I've updated the description with more information.
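
For reference, a minimal SolrJ sketch (mine, not from the issue) of the 
"spam DELETEREPLICA until it's removed from the state" workaround described 
above; the helper name and retry parameters are hypothetical:
{code:java}
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class DeleteReplicaRetry {
  // Retries DELETEREPLICA until the call stops failing or attempts run out.
  public static void deleteWithRetry(CloudSolrClient client, String collection,
      String shard, String replica, int maxAttempts) throws InterruptedException {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
      try {
        CollectionAdminRequest.deleteReplica(collection, shard, replica).process(client);
        return; // removed from state.json
      } catch (Exception e) {
        // Fails while the stale entry still points at a core that no longer exists.
        Thread.sleep(1000L);
      }
    }
  }
}
{code}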

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en mass; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually attempts to initiate 
> recovery on the replica and fails to do so because the core does not exist. 
> As a result it continually publishes a down state for the replica to 
> zookeeper.
>  # The replica node spams that it cannot locate the core because it's been 
> deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until its removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12087:
-
Priority: Critical  (was: Major)

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Critical
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en mass; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually attempts to initiate 
> recovery on the replica and fails to do so because the core does not exist. 
> As a result it continually publishes a down state for the replica to 
> zookeeper.
>  # The replica node spams that it cannot locate the core because it's been 
> deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until its removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12087:
-
Attachment: Screen Shot 2018-03-16 at 11.50.32 AM.png

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Major
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en mass; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually attempts to initiate 
> recovery on the replica and fails to do so because the core does not exist. 
> As a result it continually publishes a down state for the replica to 
> zookeeper.
>  # The replica node spams that it cannot locate the core because it's been 
> deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until its removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12087:
-
Description: 
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en masse; the 
result is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

This also occurs when trying to move replicas, since that move is an add and 
delete.

Some more information regarding this issue; when the MOVEREPLICA command is 
issued, the new replica is created successfully but the replica to be deleted 
fails to be removed from state.json (the core is deleted though) and we see two 
logs spammed.
 # The node containing the leader replica continually attempts to initiate 
recovery on the replica and fails to do so because the core does not exist. As 
a result it continually publishes a down state for the replica to zookeeper.
 # The replica node spams that it cannot locate the core because it's been 
deleted.

During this period of time, we see an increase in ZK network connectivity 
overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
shard until it's removed from the state).

My guess is there are two issues at hand here:
 # The leader continually attempts to recover a downed replica that is 
unrecoverable because the core does not exist.
 # The replica to be deleted is having trouble being deleted from state.json in 
ZK.

This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.

  was:
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en mass; the 
resulting cause is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

This also occurs when trying to move replicas, since that move is an add and 
delete.


> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Major
> Attachments: Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en mass; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually attempts to initiate 
> recovery on the replica and fails to do so because the core does not exist. 
> As a result it continually publishes a down state for the replica to 
> zookeeper.
>  # The replica node spams that it cannot locate the core because it's been 
> deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until its removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11601:

Attachment: SOLR-11601.patch

> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Priority: Minor
> Attachments: SOLR-11601.patch, SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking sort using syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402540#comment-16402540
 ] 

Varun Thacker commented on SOLR-12063:
--

Until INFRA-15850 is resolved, the user tagged with the commit will not be me.
 

> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and CDCR checkpoint API call are unable to 
> read delete-by-id's and in-place updates if they are present in the 
> transaction log throwing a ClassCastException as a WARN
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402539#comment-16402539
 ] 

ASF subversion and git services commented on SOLR-12063:


Commit 43ad71eaa2ffdd6d453e046bb1c2cae7b4504ecf in lucene-solr's branch 
refs/heads/branch_7x from [~varun_saxena]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=43ad71e ]

SOLR-12063: Fix the Jira number mentioned against the changes entry for this fix

(cherry picked from commit 77954fe)


> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and CDCR checkpoint API call are unable to 
> read delete-by-id's and in-place updates if they are present in the 
> transaction log throwing a ClassCastException as a WARN
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-03-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402538#comment-16402538
 ] 

David Smiley commented on LUCENE-8196:
--

* Nice package javadocs!
 * Maybe you will some day add a means of extracting offsets (e.g. for 
highlighting) or payloads?
 * Just curious, how did you arrive at the conclusion that you needed to 
specialize the PriorityQueue?
 * What if extractTerms took a Consumer<Term> instead of a Set<Term>? It's easy 
to invoke with a mySet::add for the common case when you have a Set, and I've 
seen cases where you might want to provide a filter before storing it wherever 
(see the sketch below).
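
To illustrate the suggestion (a sketch only -- the TermSource interface and 
method names here are hypothetical, not the actual API in the patch):
{code:java}
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;

import org.apache.lucene.index.Term;

public class ExtractTermsSketch {
  // Hypothetical shape: the source hands its terms to a Consumer rather than
  // filling a Set directly.
  interface TermSource {
    void extractTerms(Consumer<Term> consumer);
  }

  static Set<Term> collectAll(TermSource source) {
    Set<Term> terms = new HashSet<>();
    source.extractTerms(terms::add);   // the common "just give me a Set" case
    return terms;
  }

  static Set<Term> collectForField(TermSource source, String field) {
    Set<Term> terms = new HashSet<>();
    source.extractTerms(t -> {         // filter before storing
      if (field.equals(t.field())) {
        terms.add(t);
      }
    });
    return terms;
  }
}
{code}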

 

> Add IntervalQuery and IntervalsSource to expose minimum interval semantics 
> across term fields
> -
>
> Key: LUCENE-8196
> URL: https://issues.apache.org/jira/browse/LUCENE-8196
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8196.patch, LUCENE-8196.patch, LUCENE-8196.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> This ticket proposes an alternative implementation of the SpanQuery family 
> that uses minimum-interval semantics from 
> [http://vigna.di.unimi.it/ftp/papers/EfficientAlgorithmsMinimalIntervalSemantics.pdf]
>  to implement positional queries across term-based fields.  Rather than using 
> TermQueries to construct the interval operators, as in LUCENE-2878 or the 
> current Spans implementation, we instead use a new IntervalsSource object, 
> which will produce IntervalIterators over a particular segment and field.  
> These are constructed using various static helper methods, and can then be 
> passed to a new IntervalQuery which will return documents that contain one or 
> more intervals so defined.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402536#comment-16402536
 ] 

ASF subversion and git services commented on SOLR-12063:


Commit 77954fe90a1c1d47ef43f861afb2a6a2e86a3ddd in lucene-solr's branch 
refs/heads/master from [~varun_saxena]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=77954fe ]

SOLR-12063: Fix the Jira number mentioned against the changes entry for this fix


> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and CDCR checkpoint API call are unable to 
> read delete-by-id's and in-place updates if they are present in the 
> transaction log throwing a ClassCastException as a WARN
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402535#comment-16402535
 ] 

Varun Thacker commented on SOLR-12063:
--

Whoops, I tagged SOLR-12083 in the commit message, so the commits are tagged 
here:

https://issues.apache.org/jira/browse/SOLR-12083?focusedCommentId=16402531=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16402531

https://issues.apache.org/jira/browse/SOLR-12083?focusedCommentId=16402532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16402532

I still need to fix the CHANGES entry. Doing that now

> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and CDCR checkpoint API call are unable to 
> read delete-by-id's and in-place updates if they are present in the 
> transaction log throwing a ClassCastException as a WARN
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12083) RealTimeGetComponent fails for INPLACE_UPDATE when Cdcr enabled

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402531#comment-16402531
 ] 

ASF subversion and git services commented on SOLR-12083:


Commit c4d0223ad40d36fd908bb0d3b291763425fe69b4 in lucene-solr's branch 
refs/heads/master from [~varun_saxena]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c4d0223 ]

SOLR-12083: Fix PeerSync, Leader Election failures and CDCR checkpoint 
inconsistencies on a cluster running CDCR


> RealTimeGetComponent fails for INPLACE_UPDATE when Cdcr enabled 
> 
>
> Key: SOLR-12083
> URL: https://issues.apache.org/jira/browse/SOLR-12083
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1, 7.3
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12083-A-within-test-framework.patch, 
> SOLR-12083-B-wo-test-framework.patch, SOLR-12083.patch, SOLR-12083.patch, 
> SOLR-12083.patch, SOLR-12083.patch, SOLR-12083.patch, 
> add_support_for_random_ulog_in_tests.patch
>
>
> When we were adding bi-directional sync support in CDCR ( SOLR-11003 ) we 
> changed the CDCR Update Log codec to write an extra bit. 
> When we use the RealTimeGet component on a cluster running CDCR and have 
> in-place updates in the update log we will falsely trip an assert thus 
> causing the request to fail
> Here's the proposed change
> {code:java}
> - assert entry.size() == 5;
> + if (ulog instanceof CdcrUpdateLog) {
> +   assert entry.size() == 6;
> + }
> + else {
> +   assert entry.size() == 5;
> + }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12083) RealTimeGetComponent fails for INPLACE_UPDATE when Cdcr enabled

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402532#comment-16402532
 ] 

ASF subversion and git services commented on SOLR-12083:


Commit 033afbfaad0fc0b0a48967765cddf9e2b455 in lucene-solr's branch 
refs/heads/branch_7x from [~varun_saxena]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=033afbf ]

SOLR-12083: Fix PeerSync, Leader Election failures and CDCR checkpoint 
inconsistencies on a cluster running CDCR

(cherry picked from commit c4d0223)


> RealTimeGetComponent fails for INPLACE_UPDATE when Cdcr enabled 
> 
>
> Key: SOLR-12083
> URL: https://issues.apache.org/jira/browse/SOLR-12083
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1, 7.3
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12083-A-within-test-framework.patch, 
> SOLR-12083-B-wo-test-framework.patch, SOLR-12083.patch, SOLR-12083.patch, 
> SOLR-12083.patch, SOLR-12083.patch, SOLR-12083.patch, 
> add_support_for_random_ulog_in_tests.patch
>
>
> When we were adding bi-directional sync support in CDCR ( SOLR-11003 ) we 
> changed the CDCR Update Log codec to write an extra bit. 
> When we use the RealTimeGet component on a cluster running CDCR and have 
> in-place updates in the update log we will falsely trip an assert thus 
> causing the request to fail
> Here's the proposed change
> {code:java}
> - assert entry.size() == 5;
> + if (ulog instanceof CdcrUpdateLog) {
> +   assert entry.size() == 6;
> + }
> + else {
> +   assert entry.size() == 5;
> + }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402529#comment-16402529
 ] 

Varun Thacker commented on SOLR-12063:
--

I reviewed the latest patch with Amrit offline today, after which I ran 
precommit and the tests on it. Committing this shortly.

Like SOLR-12083, if Jenkins is happy today I'll check with the RM whether it's 
okay to backport this to the release branch. If it's too late, I'll move the 
CHANGES entry to 7.4 on master and branch_7x.
 

> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and CDCR checkpoint API call are unable to 
> read delete-by-id's and in-place updates if they are present in the 
> transaction log throwing a ClassCastException as a WARN
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.<init>(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11882) SolrMetric registries retain references to SolrCores when closed

2018-03-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402474#comment-16402474
 ] 

Erick Erickson commented on SOLR-11882:
---

[~ab] Here's a one-line fix that I don't particularly like but thought I'd add 
to the conversation.

This is in SolrCores, almost at the very end of the file:
{code:java}
  @Override
  public void update(Observable o, Object arg) {
    SolrCore core = (SolrCore) arg;
    // delete metrics specific to this core -- this line is the important bit.
    container.getMetricManager().removeRegistry(core.getCoreMetricManager().getRegistryName());

    synchronized (modifyLock) {
      pendingCloses.add(core); // Essentially just queue this core up for closing.
      modifyLock.notifyAll();  // Wakes up closer thread too
    }
  }
{code}

_Unloading_ a non-transient core doesn't have the same problem, since the line I 
stole is executed when unloading a core. Reloading a core (as you already 
pointed out) replaces the old reference with a new one, so that's no problem 
either.

Just closing a transient core is where the problem is, so this code is executed 
when a transient core is on its way to being closed rather than in the close 
code itself.

What I don't like about it is that it's rather loosely coupled with the close; 
by that I mean that if there's some other code somewhere that closes a core, 
_that_ code has to remember to do this too.

Anyway, I'll be happy to test anything else you come up with; it'll take me 10 
minutes or so to see what the effect of any change you want me to try is, at 
least as far as transient cores go.
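
One way to couple the cleanup to the close itself rather than to the caller, 
sketched here only as an alternative (the class and method names are mine, not 
from the attached patch), would be a close hook registered when the core is 
created:
{code:java}
import org.apache.solr.core.CloseHook;
import org.apache.solr.core.SolrCore;

public class MetricsCleanupOnClose {
  // Hypothetical wiring: register at core-creation time so that any code path
  // that closes the core also drops its metrics registry.
  public static void register(SolrCore core) {
    core.addCloseHook(new CloseHook() {
      @Override
      public void preClose(SolrCore c) {
        c.getCoreContainer().getMetricManager()
            .removeRegistry(c.getCoreMetricManager().getRegistryName());
      }

      @Override
      public void postClose(SolrCore c) {
        // nothing further to do
      }
    });
  }
}
{code}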

> SolrMetric registries retain references to SolrCores when closed
> 
>
> Key: SOLR-11882
> URL: https://issues.apache.org/jira/browse/SOLR-11882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, Server
>Affects Versions: 7.1
>Reporter: Eros Taborelli
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-11882.patch, SOLR-11882.patch, SOLR-11882.patch, 
> SOLR-11882.patch, create-cores.zip, solr-dump-full_Leak_Suspects.zip, 
> solr.config.zip
>
>
> *Description:*
> Our setup involves using a lot of small cores (possibly hundred thousand), 
> but working only on a few of them at any given time.
> We already followed all recommendations in this guide: 
> [https://wiki.apache.org/solr/LotsOfCores]
> We noticed that after creating/loading around 1000-2000 empty cores, with no 
> documents inside, the heap consumption went through the roof despite having 
> set transientCacheSize to only 64 (heap size set to 12G).
> All cores are correctly set to loadOnStartup=false and transient=true, and we 
> have verified via logs that the cores in excess are actually being closed.
> However, a reference remains in the 
> org.apache.solr.metrics.SolrMetricManager#registries that is never removed 
> until a core is fully unloaded.
> Restarting the JVM loads all cores in the admin UI, but doesn't populate the 
> ConcurrentHashMap until a core is actually fully loaded.
> I reproduced the issue on a smaller scale (transientCacheSize = 5, heap size 
> = 512m) and made a report (attached) using eclipse MAT.
> *Desired outcome:*
> When a transient core is closed, the references in the SolrMetricManager 
> should be removed, in the same fashion the reporters for the core are also 
> closed and removed.
> As an alternative, an unloadOnClose=true|false flag could be implemented to fully 
> unload a transient core when it is closed due to the cache size.
> *Note:*
> The documentation mentions everywhere that the unused cores will be unloaded, 
> but it's misleading as the cores are never fully unloaded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11731) LatLonPointSpatialField could be improved to return the lat/lon from docValues

2018-03-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402457#comment-16402457
 ] 

David Smiley commented on SOLR-11731:
-

Thanks for offering a great starting point.  I worked on it further 
significantly, esp. with testing.
* Improved testing:
** test round-trip more methodically using a RetrievalCombo class/struct to 
hold index & return value
** new testLLPDecodeIsStableAndPrecise to test that the result is stable (can 
be re-indexed to get the same value), and that it's precise (< 1.3 cm)
* Adjusted to BigDecimal setScale(7,CEILING) as comments indicate why.
* Ensured we only wrap in an array when the field is multiValued.
WDYT?
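
For anyone following along, the decode step under discussion looks roughly like the 
sketch below. This is illustrative only (not the patch): it assumes the 
LatLonDocValuesField packing with latitude in the high 32 bits, and uses the same 
setScale(7, CEILING) idea mentioned above.

{code:java}
import java.math.BigDecimal;
import java.math.RoundingMode;
import org.apache.lucene.geo.GeoEncodingUtils;

class LatLonDocValuesDecodeSketch {
  // Sketch only: turn a packed docValues long back into "lat,lon", rounding to
  // 7 decimal places so the returned value re-indexes to the same encoded form.
  static String decode(long packed) {
    double lat = GeoEncodingUtils.decodeLatitude((int) (packed >> 32));
    double lon = GeoEncodingUtils.decodeLongitude((int) (packed & 0xFFFFFFFFL));
    BigDecimal latDec = BigDecimal.valueOf(lat).setScale(7, RoundingMode.CEILING);
    BigDecimal lonDec = BigDecimal.valueOf(lon).setScale(7, RoundingMode.CEILING);
    return latDec.toPlainString() + "," + lonDec.toPlainString();
  }
}
{code}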

> LatLonPointSpatialField could be improved to return the lat/lon from docValues
> --
>
> Key: SOLR-11731
> URL: https://issues.apache.org/jira/browse/SOLR-11731
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11731.patch, SOLR-11731.patch, SOLR-11731.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> You can only return the lat & lon from a LatLonPointSpatialField if you set 
> stored=true.  But we could allow this (albeit at a small loss in precision) 
> if stored=false and docValues=true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-NightlyTests-master - Build # 3 - Still Failing

2018-03-16 Thread Apache Jenkins Server
Build: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/3/

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestLargeCluster

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 1) Thread[id=39655, 
name=AutoscalingActionExecutor-11954-thread-1, state=RUNNABLE, 
group=TGRP-TestLargeCluster] at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:90) at 
org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92) at 
org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74) at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91) at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:299)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$564/2101868463.apply(Unknown
 Source) at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)   
  at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)  
   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481) 
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
 at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)  
   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234) 
at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)   
  at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:300)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:289)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.addReplica(Row.java:122) 
at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.tryEachNode(MoveReplicaSuggester.java:59)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.MoveReplicaSuggester.init(MoveReplicaSuggester.java:34)
 at 
org.apache.solr.client.solrj.cloud.autoscaling.Suggester.getSuggestion(Suggester.java:129)
 at 
org.apache.solr.cloud.autoscaling.ComputePlanAction.process(ComputePlanAction.java:98)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers.lambda$null$3(ScheduledTriggers.java:307)
 at 
org.apache.solr.cloud.autoscaling.ScheduledTriggers$$Lambda$460/109072130.run(Unknown
 Source) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
 at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$118/1842710995.run(Unknown
 Source) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.autoscaling.sim.TestLargeCluster: 
   1) Thread[id=39655, name=AutoscalingActionExecutor-11954-thread-1, 
state=RUNNABLE, group=TGRP-TestLargeCluster]
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:90)
at org.apache.solr.common.util.Utils.makeDeepCopy(Utils.java:108)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:92)
at org.apache.solr.common.util.Utils.getDeepCopy(Utils.java:74)
at org.apache.solr.client.solrj.cloud.autoscaling.Row.copy(Row.java:91)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.lambda$getMatrixCopy$1(Policy.java:299)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session$$Lambda$564/2101868463.apply(Unknown
 Source)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.getMatrixCopy(Policy.java:300)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy$Session.copy(Policy.java:289)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Row.addReplica(Row.java:122)
at 

[jira] [Updated] (SOLR-11731) LatLonPointSpatialField could be improved to return the lat/lon from docValues

2018-03-16 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11731:

Attachment: SOLR-11731.patch

> LatLonPointSpatialField could be improved to return the lat/lon from docValues
> --
>
> Key: SOLR-11731
> URL: https://issues.apache.org/jira/browse/SOLR-11731
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11731.patch, SOLR-11731.patch, SOLR-11731.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> You can only return the lat & lon from a LatLonPointSpatialField if you set 
> stored=true.  But we could allow this (albeit at a small loss in precision) 
> if stored=false and docValues=true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11731) LatLonPointSpatialField could be improved to return the lat/lon from docValues

2018-03-16 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-11731:
---

Assignee: David Smiley

> LatLonPointSpatialField could be improved to return the lat/lon from docValues
> --
>
> Key: SOLR-11731
> URL: https://issues.apache.org/jira/browse/SOLR-11731
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: SOLR-11731.patch, SOLR-11731.patch
>
>  Time Spent: 4h 20m
>  Remaining Estimate: 0h
>
> You can only return the lat & lon from a LatLonPointSpatialField if you set 
> stored=true.  But we could allow this (albeit at a small loss in precision) 
> if stored=false and docValues=true.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402417#comment-16402417
 ] 

Amrit Sarkar commented on SOLR-11601:
-

Thanks David, added the improved error message to the patch and uploaded it. Still 
need to write tests. 

> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Priority: Minor
> Attachments: SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking sort using the syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param

2018-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11601:

Attachment: SOLR-11601.patch

> geodist fails for some fields when field is in parenthesis instead of sfield 
> param
> --
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spatial
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Priority: Minor
> Attachments: SOLR-11601.patch
>
>
> I'm switching my schemas from the deprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sort query (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking sort using the syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12063:

Summary: PeerSync and Leader Election skips delete-by-id and in-place 
updates when using CDCR  (was: PeerSync and Leader Election skips delete-by-id 
and in-place updates when using CDCRPeerSync and Leader Election skips 
delete-by-id and in-place updates when using CDCR)

> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and the CDCR checkpoint API call are unable to 
> read delete-by-ids and in-place updates if they are present in the 
> transaction log, throwing a ClassCastException as a WARN.
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCRPeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402399#comment-16402399
 ] 

Amrit Sarkar commented on SOLR-12063:
-

Here's the final iteration of the patch.
The CDCR checkpoint test has been enhanced to add a delete-by-id and in-place 
updates, which will trigger the ClassCastException.

Also, TestStressRecoveries will now test with the CDCR update log to give more 
test coverage going forward.
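
For readers not familiar with the two update types involved, this is roughly what 
they look like from SolrJ (illustrative only; the collection and field names below 
are made up):

{code:java}
import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.common.SolrInputDocument;

class CdcrTlogEntryExamples {
  // Illustrative only: the two kinds of updates that end up as the tlog entries
  // PeerSync and the CDCR checkpoint call were tripping over.
  static void send(SolrClient client) throws Exception {
    // 1) delete-by-id
    client.deleteById("cdcr-cluster1", "doc-1");

    // 2) in-place update: an atomic "inc" on a docValues-only numeric field
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-2");
    doc.addField("counter_i_dvo", Collections.singletonMap("inc", 1));
    client.add("cdcr-cluster1", doc);

    client.commit("cdcr-cluster1");
  }
}
{code}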

> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCRPeerSync and Leader Election skips delete-by-id and in-place 
> updates when using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and the CDCR checkpoint API call are unable to 
> read delete-by-ids and in-place updates if they are present in the 
> transaction log, throwing a ClassCastException as a WARN.
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21651 - Unstable!

2018-03-16 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12063) PeerSync and Leader Election skips delete-by-id and in-place updates when using CDCRPeerSync and Leader Election skips delete-by-id and in-place updates when using CDCR

2018-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12063:

Summary: PeerSync and Leader Election skips delete-by-id and in-place 
updates when using CDCRPeerSync and Leader Election skips delete-by-id and 
in-place updates when using CDCR  (was: Fix tlog entry indexes in UpdateLog for 
CDCR to work smoothly.)

> PeerSync and Leader Election skips delete-by-id and in-place updates when 
> using CDCRPeerSync and Leader Election skips delete-by-id and in-place 
> updates when using CDCR
> 
>
> Key: SOLR-12063
> URL: https://issues.apache.org/jira/browse/SOLR-12063
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2, 7.2.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> SOLR-12063.patch, SOLR-12063.patch, SOLR-12063.patch, 
> test-report-PeerSyncTest, test-report-TestStressRecovery
>
>
> CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
> format in SOLR-11003.
> PeerSync / LeaderElection code and the CDCR checkpoint API call are unable to 
> read delete-by-ids and in-place updates if they are present in the 
> transaction log, throwing a ClassCastException as a WARN.
> Here's a stack trace for the WARN.
> {code}
>   [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
> c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
> o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
> -1594312216007409664, [B@28e6859c, true]
>   [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be 
> cast to [B
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog$RecentUpdates.(UpdateLog.java:1340)
>   [beaster]   2>  at 
> org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
>   [beaster]   2>  at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
>   [beaster]   2>  at 
> org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
>   [beaster]   2>  at 
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
>   [beaster]   2>  at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
>   [beaster]   2>  at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
>   [beaster]   2>  at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12063) Fix tlog entry indexes in UpdateLog for CDCR to work smoothly.

2018-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12063:

Description: 
CDCR implements its own UpdateLog (CdcrUpdateLog). We changed the encoding 
format in SOLR-11003.
PeerSync / LeaderElection code and the CDCR checkpoint API call are unable to read 
delete-by-ids and in-place updates if they are present in the transaction log, 
throwing a ClassCastException as a WARN.

Here's a stack trace for the WARN.

{code}
  [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
-1594312216007409664, [B@28e6859c, true]
  [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be cast 
to [B
  [beaster]   2>at 
org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
  [beaster]   2>at 
org.apache.solr.update.UpdateLog$RecentUpdates.(UpdateLog.java:1340)
  [beaster]   2>at 
org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
  [beaster]   2>at 
org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
  [beaster]   2>at 
org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
  [beaster]   2>at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
  [beaster]   2>at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
  [beaster]   2>at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
  [beaster]   2>at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
  [beaster]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:384)
  [beaster]   2>at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:330)
  [beaster]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  [beaster]   2>at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  [beaster]   2>at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1637)
  [beaster]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
  [beaster]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  [beaster]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
  [beaster]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:188)
  [beaster]   2>at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1253)
  [beaster]   2>at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:168)
  [beaster]   2>at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
  [beaster]   2>at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
{code}

  was:
In *UpdateLog*, {{RecentUpdates}} reads the entries of tlogs, and throughout the 
project the entry indexes for the various operations are consistent, but odd in 
this part. As we included a new entry in TransactionLog for CDCR, read operations 
in the {{update()}} method of {{RecentUpdates}} rightfully throw errors, as 
elements are read from the wrong indexes of the tlog entry. The entry indexes of 
the tlog should be consistent throughout.

{code}
  [beaster]   2> 27394 WARN  (qtp97093533-72) [n:127.0.0.1:44658_solr 
c:cdcr-cluster1 s:shard1 r:core_node3 x:cdcr-cluster1_shard1_replica_n1] 
o.a.s.u.UpdateLog Unexpected log entry or corrupt log.  Entry=[2, 
-1594312216007409664, [B@28e6859c, true]
  [beaster]   2> java.lang.ClassCastException: java.lang.Boolean cannot be cast 
to [B
  [beaster]   2>at 
org.apache.solr.update.UpdateLog$RecentUpdates.update(UpdateLog.java:1443)
  [beaster]   2>at 
org.apache.solr.update.UpdateLog$RecentUpdates.(UpdateLog.java:1340)
  [beaster]   2>at 
org.apache.solr.update.UpdateLog.getRecentUpdates(UpdateLog.java:1513)
  [beaster]   2>at 
org.apache.solr.handler.CdcrRequestHandler.handleShardCheckpointAction(CdcrRequestHandler.java:448)
  [beaster]   2>at 
org.apache.solr.handler.CdcrRequestHandler.handleRequestBody(CdcrRequestHandler.java:198)
  [beaster]   2>at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:195)
  [beaster]   2>at 
org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
  [beaster]   2>at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:711)
  [beaster]   2>at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:517)
  

[jira] [Comment Edited] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Sathiya N Sundararajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402376#comment-16402376
 ] 

Sathiya N Sundararajan edited comment on SOLR-12087 at 3/16/18 7:25 PM:


{code:java}
// 2018-03-14 22:11:29.965 ERROR (qtp959447386-273280) [ ] 
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Unable to 
locate core subreddits_shard2_replica_n189 at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$5(CoreAdminOperation.java:150)
 at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
 at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
 at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
 at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735) at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716) 
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497) at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) 
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) 
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) 
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) 
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
at org.eclipse.jetty.server.Server.handle(Server.java:534) at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
at java.lang.Thread.run(Thread.java:748)
{code}
 


was (Author: ausathya):
{code:java}
// code placeholder
{code}
2018-03-14 22:11:29.965 ERROR (qtp959447386-273280) [ ] 
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Unable to 
locate core subreddits_shard2_replica_n189 at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$5(CoreAdminOperation.java:150)
 at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
 at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
 at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
 at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735) at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716) 
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497) at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
at 

[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Sathiya N Sundararajan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402376#comment-16402376
 ] 

Sathiya N Sundararajan commented on SOLR-12087:
---

{code:java}
// code placeholder
{code}
2018-03-14 22:11:29.965 ERROR (qtp959447386-273280) [ ] 
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: Unable to 
locate core subreddits_shard2_replica_n189 at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$5(CoreAdminOperation.java:150)
 at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:384)
 at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:389)
 at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:174)
 at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
 at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735) at 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716) 
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497) at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
 at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
 at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
 at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) 
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) 
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
 at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) 
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
 at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
 at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) 
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
 at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
 at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
at org.eclipse.jetty.server.Server.handle(Server.java:534) at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
 at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108) at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) 
at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589) 
at java.lang.Thread.run(Thread.java:748)

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Major
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12115) document the various types of domain changes in json faceting

2018-03-16 Thread Hoss Man (JIRA)
Hoss Man created SOLR-12115:
---

 Summary: document the various types of domain changes in json 
faceting
 Key: SOLR-12115
 URL: https://issues.apache.org/jira/browse/SOLR-12115
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man
Assignee: Hoss Man


I added query time join domain changes to json faceting in SOLR-10583 - but 
didn't document it in the ref guide since json faceting didn't exist in the ref 
guide at all.

we now have json faceting in the ref guide, but there isn't really a cohesive 
section explaining domain changes - so it's still not trivial to document this 
feature.

in general we should take responsibility for beefing up the docs on domain 
changes, including the block join domain features (to parents & to children) as 
well as this query domain change I added.
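
For example (illustrative only; the field names are invented and the syntax 
reflects my understanding of the json.facet API), a terms facet whose domain is 
changed by a query-time join, next to one using a block-join child domain, might 
look like:

{code}
{
  "facet": {
    "top_categories": {
      "type": "terms",
      "field": "cat_s",
      "domain": { "join": { "from": "manu_id_s", "to": "id" } }
    },
    "child_types": {
      "type": "terms",
      "field": "type_s",
      "domain": { "blockChildren": "content_type_s:parent" }
    }
  }
}
{code}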



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12087:
-
Summary: Deleting replicas sometimes fails and causes the replicas to exist 
in the down state  (was: Deleting shards sometimes fails and causes the shard 
to exist in the down state)

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Major
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12087) Deleting shards sometimes fails and causes the shard to exist in the down state

2018-03-16 Thread Jerry Bao (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry Bao updated SOLR-12087:
-
Description: 
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en masse; the 
result is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

This also occurs when trying to move replicas, since that move is an add and 
delete.

  was:
Sometimes when deleting replicas, the replica fails to be removed from the 
cluster state. This occurs especially when deleting replicas en masse; the 
result is that the data is deleted but the replicas aren't removed 
from the cluster state. Attempting to delete the downed replicas causes 
failures because the core does not exist anymore.

It seems like when deleting replicas, ZK writes are timing out, preventing the 
cluster state from being properly updated.


> Deleting shards sometimes fails and causes the shard to exist in the down 
> state
> ---
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Priority: Major
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en masse; the 
> result is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11022) SynonymGraphFilterFactory proximity search error

2018-03-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402325#comment-16402325
 ] 

Diogo Guilherme Leão Edelmuth commented on SOLR-11022:
--

 

Hi, [~jimczi],

Is the following file the reason for this error?

[lucene-solr/solr/core/src/java/org/apache/solr/parser/SolrQueryParserBase.java|https://github.com/apache/lucene-solr/blob/83753d0a2ae5bdd00649f43e355b5a43c6709917/solr/core/src/java/org/apache/solr/parser/SolrQueryParserBase.java]

It just seems to be leaving the SpanNearQuery out.

If so, could we not just add the following, so that it also applies the slop to 
SpanNearQuery?:

{code:java}
else if (query instanceof SpanNearQuery) {
  SpanNearQuery snq = (SpanNearQuery) query;
  if (slop != snq.getSlop()) {
    query = new SpanNearQuery.Builder(snq).setSlop(slop).build();
  }
}
{code}
SpanNearQuery seems to have the same method for applying the slop.

Here is how the code is today:
{code:java}
  /**
   * Base implementation delegates to {@link #getFieldQuery(String,String,boolean,boolean)}.
   * This method may be overridden, for example, to return
   * a SpanNearQuery instead of a PhraseQuery.
   *
   */
  protected Query getFieldQuery(String field, String queryText, int slop)
    throws SyntaxError {
    Query query = getFieldQuery(field, queryText, true, false);

    // only set slop of the phrase query was a result of this parser
    // and not a sub-parser.
    if (subQParser == null) {
      if (query instanceof PhraseQuery) {                          <<==
        PhraseQuery pq = (PhraseQuery) query;
        Term[] terms = pq.getTerms();
        int[] positions = pq.getPositions();
        PhraseQuery.Builder builder = new PhraseQuery.Builder();
        for (int i = 0; i < terms.length; ++i) {
          builder.add(terms[i], positions[i]);
        }
        builder.setSlop(slop);
        query = builder.build();
      } else if (query instanceof MultiPhraseQuery) {              <<
        MultiPhraseQuery mpq = (MultiPhraseQuery) query;

        if (slop != mpq.getSlop()) {
          query = new MultiPhraseQuery.Builder(mpq).setSlop(slop).build();
        }
      }
    }
    return query;
  }
{code}

Thanks for the attention again!

> SynonymGraphFilterFactory proximity search error
> 
>
> Key: SOLR-11022
> URL: https://issues.apache.org/jira/browse/SOLR-11022
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.6
>Reporter: Diogo Guilherme Leão Edelmuth
>Priority: Major
>  Labels: span, synonym
>
> There seems to be an issue when doing proximity searches that include terms 
> that have multi-word synonyms.
> Example:
> consider there's is configured in synonyms.txt
> (
> grand mother, grandmother
> grandfather, granddad
> )
> and there's an indexed field with: (My mother and my grandmother went...)
> Proximity search with: ("mother grandmother"~8)
> won't return the file, while ("father grandfather"~8) does return the 
> analogous file.
> I am not a developer of Solr, so pardon if I am wrong, but I ran it with 
> debug=query and saw that when proximity searches are done with multi-term 
> synonyms, the called function is spanNearQuery: 
> "parsedquery":"SpanNearQuery(spanNear([laudo:mother,
> spanOr([laudo:grand mother, laudo:grandmother])],*0*, true))"
> while proximity searches with one-term synonyms are executed with:
> "MultiPhraseQuery(laudo:\"father (grandfather granddad)\"~10)"
> Note that the SpanNearQuery is called with a slope parameter of 0, no matter 
> what is passed after the tilde. So if I search the exact phrase it does match.
> Here is my field-type, just in case:
>  class="solr.TextField" positionIncrementGap="100">
> 
> 
> 
>  words="lang/stopwords_pt.txt" ignoreCase="true"/>
> 
> 
> 
>  class="solr.LowerCaseFilterFactory"/>
>  words="lang/stopwords_pt.txt" ignoreCase="true"/> class="solr.ASCIIFoldingFilterFactory" preserveOriginal="true"/>
>  ignoreCase="true" synonyms="synonyms_radex.txt"/>
> 
> 
> 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11629) CloudSolrClient.Builder should accept a zk host

2018-03-16 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11629?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski resolved SOLR-11629.

   Resolution: Fixed
Fix Version/s: master (8.0)
   7.3

> CloudSolrClient.Builder should accept a zk host
> ---
>
> Key: SOLR-11629
> URL: https://issues.apache.org/jira/browse/SOLR-11629
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Jason Gerlowski
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-11629.patch, SOLR-11629.patch, SOLR-11629.patch, 
> SOLR-11629.patch, SOLR-11629.patch
>
>
> Today we need to create an empty builder and then either call withZkHost or 
> withSolrUrl:
> {code}
> SolrClient solrClient = new 
> CloudSolrClient.Builder().withZkHost("localhost:9983").build();
> solrClient.request(updateRequest, "gettingstarted");
> {code}
> What if we have two constructors: one that accepts a zkHost and one that 
> accepts a SolrUrl?
> The advantages that I can think of are:
> - It will be obvious to users that we support two mechanisms of creating a 
> CloudSolrClient. The SolrUrl option is cool: applications don't need to 
> know about ZooKeeper, and new users will learn about this. Maybe our 
> examples in the ref guide should use this? 
> - Today people can set both zkHost and solrUrl, but CloudSolrClient can only 
> utilize one of them.
> HttpSolrClient's Builder accepts the host: 
> {code}
> HttpSolrClient client = new 
> HttpSolrClient.Builder("http://localhost:8983/solr;).build();
> client.request(updateRequest, "techproducts");
> {code}
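
For reference, a rough sketch of what the constructor-based usage could look like 
(my reading of the proposal; the exact constructor signatures in the committed 
patch may differ):

{code:java}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;

class BuilderExamples {
  static void demo() throws Exception {
    // ZooKeeper-based: a list of zk hosts plus an optional chroot
    CloudSolrClient viaZk = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:9983"), Optional.empty()).build();

    // URL-based: a list of Solr base URLs, no ZooKeeper knowledge needed
    CloudSolrClient viaUrl = new CloudSolrClient.Builder(
        Collections.singletonList("http://localhost:8983/solr")).build();

    viaZk.close();
    viaUrl.close();
  }
}
{code}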



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12059) Unable to rename solr.xml

2018-03-16 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402302#comment-16402302
 ] 

Gus Heck edited comment on SOLR-12059 at 3/16/18 6:20 PM:
--

I suspect the OP's goal is probably to make it possible to tell by inspection 
what version is presently deployed (and did it get properly updated during 
deploy etc). However I might suggest adding an xml comment in the file with 
that info (either via the build, or perhaps something from 
[http://svnbook.red-bean.com/en/1.7/svn.advanced.props.special.keywords.html] 
would be sufficient)...  rather than renaming the file itself. 

Edit: Ah I missed one of the comments above, rollback is a goal too. In which 
case Dave Smiley's suggestion seems best. 


was (Author: gus_heck):
I suspect the OP's goal is probably to make it possible to tell by inspection 
what version is presently deployed (and did it get properly updated during 
deploy etc). However I might suggest adding an xml comment in the file with 
that info (either via the build, or perhaps something from 
[http://svnbook.red-bean.com/en/1.7/svn.advanced.props.special.keywords.html] 
would be sufficient)...  rather than renaming the file itself. 

> Unable to rename solr.xml
> -
>
> Key: SOLR-12059
> URL: https://issues.apache.org/jira/browse/SOLR-12059
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
> Environment: Renaming of solr,xml in the $SOLR_HOME directory
>Reporter: Edwin Yeo Zheng Lin
>Priority: Major
>
> I am able to rename files like solrconfig.xml and solr.log to custom 
> names like myconfig.xml and my.log quite seamlessly. 
> However, I am not able to do the same for solr.xml. I understand that the 
> name solr.xml is hard-coded in SolrXmlConfig.java, meaning it requires a 
> re-compile of the jar file in order to rename it.
> Since we can rename files like solrconfig.xml via the properties files, 
> shouldn't we be able to do the same for solr.xml?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402307#comment-16402307
 ] 

ASF subversion and git services commented on SOLR-12067:


Commit 0c4218b6e45f238941c5a4eadc57b5d530cdb8ea in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0c4218b ]

SOLR-12067: omitted correct information about where to define autoAddReplica 
trigger param


> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12067-test-fix.patch, SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM which is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems, this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402308#comment-16402308
 ] 

ASF subversion and git services commented on SOLR-12067:


Commit 06bdd4d42f34a83eec28a00d391a35b9b904303f in lucene-solr's branch 
refs/heads/branch_7_3 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=06bdd4d ]

SOLR-12067: omitted correct information about where to define autoAddReplica 
trigger param


> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12067-test-fix.patch, SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased, as a JVM which is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it to the same index directory. 
> But for non-shared file systems, this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12114) Changes made to 'autoReplicaFailoverWaitAfterExpiration' in solr.xml are not reflected automatically in the autoAddReplicas autoscaling trigger

2018-03-16 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12114:


 Summary: Changes made to 'autoReplicaFailoverWaitAfterExpiration' 
in solr.xml are not reflected automatically in the autoAddReplicas autoscaling 
trigger
 Key: SOLR-12114
 URL: https://issues.apache.org/jira/browse/SOLR-12114
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling, SolrCloud
Affects Versions: 7.2, 7.3
Reporter: Shalin Shekhar Mangar
 Fix For: 7.4, master (8.0)


The value of {{autoReplicaFailoverWaitAfterExpiration}} is automatically used as 
the {{waitFor}} when creating the {{.autoAddReplicas}} trigger. But changes made 
to {{autoReplicaFailoverWaitAfterExpiration}} in solr.xml afterwards are not 
reflected automatically in the autoAddReplicas autoscaling trigger 
configuration.
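
For reference, a sketch of where that value lives (element placement per the
solr.xml <solrcloud> section as I understand it; the 120000 ms value is just an
example, not necessarily the shipped default):

  <solr>
    <solrcloud>
      <!-- how long to wait, in ms, after a node's ZooKeeper session expires
           before the autoAddReplicas machinery considers replacing its replicas -->
      <int name="autoReplicaFailoverWaitAfterExpiration">120000</int>
    </solrcloud>
  </solr>

At the moment the trigger only picks this up when it is first created, which is
the gap described above.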



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12067) AutoAddReplicas default 30 second wait time is too low

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402306#comment-16402306
 ] 

ASF subversion and git services commented on SOLR-12067:


Commit 80485cf5175054a01eec6e254abde517d82cac15 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=80485cf ]

SOLR-12067: omitted correct information about where to define autoAddReplica 
trigger param


> AutoAddReplicas default 30 second wait time is too low
> --
>
> Key: SOLR-12067
> URL: https://issues.apache.org/jira/browse/SOLR-12067
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12067-test-fix.patch, SOLR-12067.patch
>
>
> If I create a collection with autoAddReplicas=true in Solr 7.x, an 
> AutoAddReplicasPlanAction gets created with waitFor=30 seconds.
> The default should be increased: a JVM that is down for more than 
> 30 seconds can cause the framework to add a new replica on another node.
> With HDFS this was a cheap operation, as it only involved creating a core and 
> pointing it at the same index directory. 
> But for non-shared file systems this is a very expensive operation that can 
> potentially move large indexes around, so maybe we should have a higher default.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12059) Unable to rename solr.xml

2018-03-16 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402302#comment-16402302
 ] 

Gus Heck commented on SOLR-12059:
-

I suspect the OP's goal is to make it possible to tell by inspection which 
version is presently deployed (and whether it got properly updated during 
deploy, etc.). However, I might suggest adding an XML comment in the file with 
that info (either via the build, or perhaps something from 
[http://svnbook.red-bean.com/en/1.7/svn.advanced.props.special.keywords.html] 
would be sufficient) rather than renaming the file itself. 
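
Purely as an illustration (all values hypothetical), the sort of line a build
could stamp near the top of solr.xml:

  <!-- deployed from build 123 on 2018-03-16, git rev abc1234 -->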

> Unable to rename solr.xml
> -
>
> Key: SOLR-12059
> URL: https://issues.apache.org/jira/browse/SOLR-12059
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
> Environment: Renaming of solr.xml in the $SOLR_HOME directory
>Reporter: Edwin Yeo Zheng Lin
>Priority: Major
>
> I am able to rename file names like solrconfig.xml and solr.log to custom 
> names like myconfig.xml and my.log quite seamlessly. 
> However, I am not able to do the same for solr.xml. I understand that 
> solr.xml is hard-coded in SolrXmlConfig.java, meaning it requires a 
> recompile of the jar file in order to rename it.
> Since we can rename files like solrconfig.xml via the properties files, 
> shouldn't we be able to do the same for solr.xml?
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12097) Document the disktype policy attribute and usage of disk space in Collection APIs

2018-03-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-12097.
--
Resolution: Fixed

Thanks Cassandra!

> Document the disktype policy attribute and usage of disk space in Collection 
> APIs
> -
>
> Key: SOLR-12097
> URL: https://issues.apache.org/jira/browse/SOLR-12097
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12097.patch
>
>
> The disktype attribute is now supported in policy and we have collection APIs 
> making use of disk space requirements and throwing exceptions if they cannot 
> be satisfied. We need to document both in the ref guide.
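
For anyone reading this before the updated ref guide is published, a rough
sketch of the kind of rules being documented (attribute names per the 7.3
policy framework; treat the values and thresholds as placeholders, not
recommendations):

  curl -X POST -H 'Content-type:application/json' \
       http://localhost:8983/solr/admin/autoscaling -d '{
    "set-cluster-policy": [
      {"replica": "#ALL", "diskType": "ssd"},
      {"replica": 0, "freedisk": "<100", "strict": false}
    ]
  }'

The first rule asks that every replica land on a node reporting an ssd disk;
the second discourages placing replicas on nodes with under 100 GB free.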



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12097) Document the disktype policy attribute and usage of disk space in Collection APIs

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402297#comment-16402297
 ] 

ASF subversion and git services commented on SOLR-12097:


Commit b823f2e89127171c413add9183278741caad16c0 in lucene-solr's branch 
refs/heads/branch_7_3 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b823f2e ]

SOLR-12097: Document the diskType policy attribute and usage of disk space in 
Collection APIs

(cherry picked from commit 4c8825b)

(cherry picked from commit d7d278c)


> Document the disktype policy attribute and usage of disk space in Collection 
> APIs
> -
>
> Key: SOLR-12097
> URL: https://issues.apache.org/jira/browse/SOLR-12097
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12097.patch
>
>
> The disktype attribute is now supported in policy and we have collection APIs 
> making use of disk space requirements and throwing exceptions if they cannot 
> be satisfied. We need to document both in the ref guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12097) Document the disktype policy attribute and usage of disk space in Collection APIs

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402291#comment-16402291
 ] 

ASF subversion and git services commented on SOLR-12097:


Commit d7d278c44abc1f63983cc5678035daf00391bb96 in lucene-solr's branch 
refs/heads/branch_7x from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d7d278c ]

SOLR-12097: Document the diskType policy attribute and usage of disk space in 
Collection APIs

(cherry picked from commit 4c8825b)


> Document the disktype policy attribute and usage of disk space in Collection 
> APIs
> -
>
> Key: SOLR-12097
> URL: https://issues.apache.org/jira/browse/SOLR-12097
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12097.patch
>
>
> The disktype attribute is now supported in policy and we have collection APIs 
> making use of disk space requirements and throwing exceptions if they cannot 
> be satisfied. We need to document both in the ref guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12097) Document the disktype policy attribute and usage of disk space in Collection APIs

2018-03-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402288#comment-16402288
 ] 

ASF subversion and git services commented on SOLR-12097:


Commit 4c8825b6c67ce9f2bde2fbfae8cd42c22b670470 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4c8825b ]

SOLR-12097: Document the diskType policy attribute and usage of disk space in 
Collection APIs


> Document the disktype policy attribute and usage of disk space in Collection 
> APIs
> -
>
> Key: SOLR-12097
> URL: https://issues.apache.org/jira/browse/SOLR-12097
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12097.patch
>
>
> The disktype attribute is now supported in policy and we have collection APIs 
> making use of disk space requirements and throwing exceptions if they cannot 
> be satisfied. We need to document both in the ref guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12113) Disk free requirements for Collection APIs are not enforced if autoscaling policy is not present

2018-03-16 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-12113:


 Summary: Disk free requirements for Collection APIs are not 
enforced if autoscaling policy is not present
 Key: SOLR-12113
 URL: https://issues.apache.org/jira/browse/SOLR-12113
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling, SolrCloud
Affects Versions: 7.3
Reporter: Shalin Shekhar Mangar
 Fix For: 7.4, master (8.0)


Disk free requirements for Collection APIs are not enforced if an autoscaling 
policy is not present. But as long as the user is not using the old replica 
placement rules framework, we should ensure that disk free requirements are 
respected. It should not matter whether there is a policy or not. A new cluster 
with default cluster preferences should have these protections in place.
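
Until that is fixed, a sketch of the explicit guard a cluster can add today
(assuming the 7.x autoscaling API; the 50 GB threshold is only an example):

  # refuse to place replicas on nodes with less than 50 GB of free disk
  curl -X POST -H 'Content-type:application/json' \
       http://localhost:8983/solr/admin/autoscaling -d '{
    "set-cluster-policy": [
      {"replica": 0, "freedisk": "<50"}
    ]
  }'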



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12097) Document the disktype policy attribute and usage of disk space in Collection APIs

2018-03-16 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402274#comment-16402274
 ] 

Cassandra Targett commented on SOLR-12097:
--

Looks good, Shalin - +1.

> Document the disktype policy attribute and usage of disk space in Collection 
> APIs
> -
>
> Key: SOLR-12097
> URL: https://issues.apache.org/jira/browse/SOLR-12097
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12097.patch
>
>
> The disktype attribute is now supported in policy and we have collection APIs 
> making use of disk space requirements and throwing exceptions if they cannot 
> be satisfied. We need to document both in the ref guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12097) Document the disktype policy attribute and usage of disk space in Collection APIs

2018-03-16 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-12097:
-
Attachment: SOLR-12097.patch

> Document the disktype policy attribute and usage of disk space in Collection 
> APIs
> -
>
> Key: SOLR-12097
> URL: https://issues.apache.org/jira/browse/SOLR-12097
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: SOLR-12097.patch
>
>
> The disktype attribute is now supported in policy and we have collection APIs 
> making use of disk space requirements and throwing exceptions if they cannot 
> be satisfied. We need to document both in the ref guide.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



subscribe

2018-03-16 Thread Asher Shih
subscribe

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-03-16 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16402212#comment-16402212
 ] 

Mark Miller commented on SOLR-8207:
---

bq. I think we should avoid renaming "Tree" to "ZooKeeper".

+1 - I'm not against a rename, but there must be a better name for Solr that is 
not implementation-related.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Upayavira
>Priority: Major
> Attachments: nodes-tab.png
>
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.
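
For what it's worth, parts of that workflow already exist in the Collections
API as REPLACENODE and DELETENODE, so a first cut of a "decommission" action in
the UI could wrap calls like the following (node names are placeholders;
parameter names per the 7.x API as I recall):

  # move everything off host1, then drop whatever is left registered to it
  curl 'http://localhost:8983/solr/admin/collections?action=REPLACENODE&sourceNode=host1:8983_solr&targetNode=host2:8983_solr'
  curl 'http://localhost:8983/solr/admin/collections?action=DELETENODE&node=host1:8983_solr'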



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


