[jira] [Commented] (SOLR-12152) Split up TriggerIntegrationTest into multiple tests to isolate and increase reliability

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420170#comment-16420170
 ] 

ASF subversion and git services commented on SOLR-12152:


Commit ed9e5eb75b38fb24c1d32e885941d065d284ffa0 in lucene-solr's branch 
refs/heads/master from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ed9e5eb ]

SOLR-12152: Split up TriggerIntegrationTest into multiple tests to isolate and 
increase reliability


> Split up TriggerIntegrationTest into multiple tests to isolate and increase 
> reliability
> ---
>
> Key: SOLR-12152
> URL: https://issues.apache.org/jira/browse/SOLR-12152
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud, Tests
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> TriggerIntegrationTest is big enough already. It is time to split it up into 
> multiple test classes. This will keep one test method from affecting the 
> others and help tone down the logs in case we need to troubleshoot further.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420149#comment-16420149
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit 95e1903d0645f75bbaec7f34901c3199bc254ed5 in lucene-solr's branch 
refs/heads/branch_7x from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=95e1903 ]

SOLR-12028: BadApple and AwaitsFix annotations usage

(cherry picked from commit 2370731)


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgement call.
> - AwaitsFix annotations are used for tests whose problem, for some reason, 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking the failure down, dependency on another JIRA, etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically, Jenkins jobs will be run with BadApples enabled so that BadApple 
> tests aren't lost and reports can be generated. Tests that fail when run with 
> BadApples disabled require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the times 
> they're identified as BadApple and they're either fixed or changed to 
> AwaitsFix or assigned their own JIRA.
> I've assigned this to myself to track so I don't lose track of it. No one 
> person will fix all of these issues; this will be an ongoing technical debt 
> cleanup effort.
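
For reference, a minimal sketch of how these annotations are typically applied
in the Lucene/Solr test framework. It assumes the LuceneTestCase.BadApple and
LuceneTestCase.AwaitsFix annotations and the tests.badapples system property;
the test class name and the issue key in the second annotation are placeholders,
not real references.

{code:java}
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;
import org.apache.lucene.util.LuceneTestCase.BadApple;
import org.apache.solr.SolrTestCaseJ4;
import org.junit.Test;

public class SomeFlakyTest extends SolrTestCaseJ4 {

  // Fails intermittently (say < 30% of the time): annotate with BadApple and
  // point at this umbrella issue. Jenkins runs with -Dtests.badapples=false
  // will skip it; developer runs (badapples enabled by default) still run it.
  @Test
  @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-12028")
  public void testOccasionallyFlaky() throws Exception {
    // ... test body ...
  }

  // Fails more often, or is blocked on something that can't be fixed right
  // away: annotate with AwaitsFix so it is skipped everywhere until the
  // linked issue (placeholder key here) is resolved.
  @Test
  @AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/SOLR-NNNNN")
  public void testKnownBroken() throws Exception {
    // ... test body ...
  }
}
{code}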



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12028) BadApple and AwaitsFix annotations usage

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420147#comment-16420147
 ] 

ASF subversion and git services commented on SOLR-12028:


Commit 23707314dd7fa67c7dd089d8fb6c1bece4817408 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2370731 ]

SOLR-12028: BadApple and AwaitsFix annotations usage


> BadApple and AwaitsFix annotations usage
> 
>
> Key: SOLR-12028
> URL: https://issues.apache.org/jira/browse/SOLR-12028
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12016-buildsystem.patch, SOLR-12028-3-Mar.patch, 
> SOLR-12028-sysprops-reproduce.patch, SOLR-12028.patch, SOLR-12028.patch
>
>
> There's a long discussion of this topic at SOLR-12016. Here's a summary:
> - BadApple annotations are used for tests that intermittently fail, say < 30% 
> of the time. Tests that fail more often should be moved to AwaitsFix. This is, 
> of course, a judgement call.
> - AwaitsFix annotations are used for tests whose problem, for some reason, 
> can't be fixed immediately. Likely reasons are third-party dependencies, 
> extreme difficulty tracking the failure down, dependency on another JIRA, etc.
> Jenkins jobs will typically run with BadApple disabled to cut down on noise. 
> Periodically, Jenkins jobs will be run with BadApples enabled so that BadApple 
> tests aren't lost and reports can be generated. Tests that fail when run with 
> BadApples disabled require _immediate_ attention.
> The default for developers is that BadApple is enabled.
> If you are working on one of these tests and cannot get the test to fail 
> locally, it is perfectly acceptable to comment the annotation out. You should 
> let the dev list know that this is deliberate.
> This JIRA is a placeholder for BadApple tests to point to between the times 
> they're identified as BadApple and they're either fixed or changed to 
> AwaitsFix or assigned their own JIRA.
> I've assigned this to myself to track so I don't lose track of it. No one 
> person will fix all of these issues; this will be an ongoing technical debt 
> cleanup effort.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6399) Implement unloadCollection in the Collections API

2018-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420139#comment-16420139
 ] 

Erick Erickson commented on SOLR-6399:
--

I don't think this can be dealt with by backup/restore, for the reasons Yago 
outlines. Loading a collection might take a few minutes, whereas restoring a 
multi-terabyte index could take hours or days.


> Implement unloadCollection in the Collections API
> -
>
> Key: SOLR-6399
> URL: https://issues.apache.org/jira/browse/SOLR-6399
> Project: Solr
>  Issue Type: New Feature
>Reporter: dfdeshom
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 6.0
>
>
> There is currently no way to unload a collection without deleting its 
> contents. There should be a way in the collections API to unload a collection 
> and reload it later, as needed.
> A use case for this is the following: you store logs by day, with each day 
> having its own collection. You are required to store up to 2 years of data, 
> which adds up to 730 collections. Most of the time, you'll want to have 3 
> days of data loaded for search. Having just 3 collections loaded into memory, 
> instead of 730, will make managing Solr easier.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6634) The Collections API should have a UNLOAD and LOAD command

2018-03-29 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-6634.
--
Resolution: Duplicate

> The Collections API should have a UNLOAD and LOAD command
> -
>
> Key: SOLR-6634
> URL: https://issues.apache.org/jira/browse/SOLR-6634
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Priority: Major
>
> It would be useful if we allowed users to take their collection offline and 
> bring it back online.
> The UNLOAD command can just unload all the cores in the collection, leaving 
> the ZK information in place.
> Then the LOAD command can just use the clusterstate/state.json file to fire 
> CREATE core commands. I guess it should fail if the node that previously 
> hosted the core is no longer present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied

2018-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420132#comment-16420132
 ] 

Erick Erickson commented on SOLR-12161:
---

Pretty sure these are the exact same issue.

> CloudSolrClient with basic auth enabled will update even if no credentials 
> supplied
> ---
>
> Key: SOLR-12161
> URL: https://issues.apache.org/jira/browse/SOLR-12161
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 7.3
>Reporter: Erick Erickson
>Assignee: Noble Paul
>Priority: Major
> Attachments: AuthUpdateTest.java, tests.patch
>
>
> This is an offshoot of SOLR-9399. When I was writing a test, if I create a 
> cluster with basic authentication set up, I can _still_ add documents to a 
> collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the 
> request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update 
> request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting 
> up the update request to go to each individual shard?
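
For illustration, a hedged sketch of the asymmetry being described, separate
from the attached AuthUpdateTest.java; the collection name, ZK address, and
credentials are made up, and it assumes the stock SolrJ request API
(SolrRequest.setBasicAuthCredentials):

{code:java}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;
import org.apache.solr.common.params.MapSolrParams;

public class BasicAuthSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:9983"), Optional.empty()).build()) {

      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");

      // Update WITHOUT credentials: per this report it currently succeeds
      // when sent through CloudSolrClient against a basic-auth cluster.
      UpdateRequest update = new UpdateRequest();
      update.add(doc);
      update.process(client, "testCollection");

      // Query WITHOUT credentials: rejected as expected.
      QueryRequest query = new QueryRequest(
          new MapSolrParams(Collections.singletonMap("q", "*:*")));
      try {
        query.process(client, "testCollection");
      } catch (Exception expected) {
        // 401 from the authentication plugin
      }

      // With credentials attached, both request types succeed.
      update.setBasicAuthCredentials("solr", "SolrRocks");
      update.process(client, "testCollection");
    }
  }
}
{code}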



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied

2018-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420131#comment-16420131
 ] 

Erick Erickson commented on SOLR-12161:
---

[~steve_rowe] Sure looks like exactly the same problem. I'll link it here for 
the time being, but I think it's _highly_ likely that these are one and the 
same.

> CloudSolrClient with basic auth enabled will update even if no credentials 
> supplied
> ---
>
> Key: SOLR-12161
> URL: https://issues.apache.org/jira/browse/SOLR-12161
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 7.3
>Reporter: Erick Erickson
>Assignee: Noble Paul
>Priority: Major
> Attachments: AuthUpdateTest.java, tests.patch
>
>
> This is an offshoot of SOLR-9399. When I was writing a test, if I create a 
> cluster with basic authentication set up, I can _still_ add documents to a 
> collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the 
> request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update 
> request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting 
> up the update request to go to each individual shard?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12166) Race condition in rejoinElection and registering replica

2018-03-29 Thread Cao Manh Dat (JIRA)
Cao Manh Dat created SOLR-12166:
---

 Summary: Race condition in rejoinElection and registering replica
 Key: SOLR-12166
 URL: https://issues.apache.org/jira/browse/SOLR-12166
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Cao Manh Dat
Assignee: Cao Manh Dat


I found this case while beasting LIROnShardRestartTest. The sequence is:
 * ReplicaA may be the prospective leader - it tries to sync with other 
replicas but somehow fails to become the leader (e.g. because of an LIR flag).
 * ReplicaA calls rejoinElection and therefore starts the recovery process.
 * After rejoinElection, it somehow wins the election (e.g. all replicas 
participated in the election, so the LIR flag is cleared).
 * ReplicaA registers itself as ACTIVE after winning the election.
 * The recovery process started above then publishes ReplicaA as DOWN or 
RECOVERY.
 * We end up with a dead-end shard with a DOWN leader, so the other replicas 
can't recover from ReplicaA.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10616) use more ant variables in ref guide pages: particular for javadoc & third-party lib versions

2018-03-29 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420109#comment-16420109
 ] 

Hoss Man commented on SOLR-10616:
-

Thanks for reminding me about this issue, Steve.

I think the things still outstanding are the java version and the "java" javadoc 
links ... but I'll try to tackle those soon.

> use more ant variables in ref guide pages: particular for javadoc & 
> third-party lib versions
> 
>
> Key: SOLR-10616
> URL: https://issues.apache.org/jira/browse/SOLR-10616
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Priority: Major
>
> we already use ant variables for the lucene/solr version when building 
> lucene/solr javadoc links, but it would be nice if we could slurp in the JDK 
> javadoc URLs for the current java version & the versions.properties values 
> for all third-party deps as well, so that links to things like the zookeeper 
> guide, or the tika guide, or the javadocs for DateInstance would always be 
> "current"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10616) use more ant variables in ref guide pages: particular for javadoc & third-party lib versions

2018-03-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420089#comment-16420089
 ] 

Steve Rowe commented on SOLR-10616:
---

Is this issue still needed, or was it effectively handled by SOLR-12118?

> use more ant variables in ref guide pages: particular for javadoc & 
> third-party lib versions
> 
>
> Key: SOLR-10616
> URL: https://issues.apache.org/jira/browse/SOLR-10616
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Hoss Man
>Priority: Major
>
> we already use ant variables for the lucene/solr version when building 
> lucene/solr javadoc links, but it would be nice if we could slurp in the JDK 
> javadoc URLs for the current java version & the versions.properties values 
> for all third-party deps as well, so that links to things like the zookeeper 
> guide, or the tika guide, or the javadocs for DateInstance would always be 
> "current"



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied

2018-03-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420086#comment-16420086
 ] 

Steve Rowe commented on SOLR-12161:
---

Is this the same problem as the one described on SOLR-9804?

> CloudSolrClient with basic auth enabled will update even if no credentials 
> supplied
> ---
>
> Key: SOLR-12161
> URL: https://issues.apache.org/jira/browse/SOLR-12161
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 7.3
>Reporter: Erick Erickson
>Assignee: Noble Paul
>Priority: Major
> Attachments: AuthUpdateTest.java, tests.patch
>
>
> This is an offshoot of SOLR-9399. When I was writing a test, if I create a 
> cluster with basic authentication set up, I can _still_ add documents to a 
> collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the 
> request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update 
> request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting 
> up the update request to go to each individual shard?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6634) The Collections API should have a UNLOAD and LOAD command

2018-03-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420074#comment-16420074
 ] 

Steve Rowe commented on SOLR-6634:
--

Can this issue be closed as a duplicate of SOLR-6399?

> The Collections API should have a UNLOAD and LOAD command
> -
>
> Key: SOLR-6634
> URL: https://issues.apache.org/jira/browse/SOLR-6634
> Project: Solr
>  Issue Type: Improvement
>Reporter: Varun Thacker
>Priority: Major
>
> It would be useful if we allowed users to take their collection offline and 
> bring it back online.
> The UNLOAD command can just unload all the cores in the collection, leaving 
> the ZK information in place.
> Then the LOAD command can just use the clusterstate/state.json file to fire 
> CREATE core commands. I guess it should fail if the node that previously 
> hosted the core is no longer present.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5775) Disable constantly failing solr tests

2018-03-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-5775.
--
Resolution: Done

Resolving in favor of modern efforts at SOLR-12037 (née SOLR-12016) and 
SOLR-12028.

> Disable constantly failing solr tests
> -
>
> Key: SOLR-5775
> URL: https://issues.apache.org/jira/browse/SOLR-5775
> Project: Solr
>  Issue Type: Bug
>  Components: Build
>Reporter: Robert Muir
>Priority: Major
>
> Currently, solr tests are failing 90%+ of the time. We've been through this 
> before many times; the argument is always that someone is looking at the 
> failures and knows which ones are bad.
> This argument is a lie. Nobody is watching these failures, or 
> DistributedQueryComponentOptimizationTest would not have failed repeatedly 
> for two straight days when the fix was trivial (I fixed this last night: 
> http://svn.apache.org/r1571930).
> It's frustrating to me as a committer: solr tests *NEVER* pass on my machine, 
> no matter how many times I try. How can I possibly commit something without 
> knowing I am making the situation even worse?
> This is all a big problem for developers, release managers, even users of the 
> project. The test suite should pass.
> The old argument that "solr tests are allowed to fail" is no longer valid. I 
> will disable all constantly failing tests.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7246 - Still Unstable!

2018-03-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7246/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
SimTimeSource:50.0 time diff=14751850

Stack Trace:
java.lang.AssertionError: SimTimeSource:50.0 time diff=14751850
at 
__randomizedtesting.SeedInfo.seed([797212DD0C583F18:411E61F898889D5E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.common.util.TestTimeSource.doTestEpochTime(TestTimeSource.java:48)
at 
org.apache.solr.common.util.TestTimeSource.testEpochTime(TestTimeSource.java:32)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.common.util.TestTimeSource.testEpochTime

Error Message:
NanoTimeSource time diff=2858952

Stack Trace:
java.lang.AssertionError: NanoTimeSource time diff=2858952
   

[jira] [Commented] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420024#comment-16420024
 ] 

ASF subversion and git services commented on SOLR-12162:


Commit 9935af16a844411a93840f31f82a56f2c80025a0 in lucene-solr's branch 
refs/heads/branch_7x from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9935af1 ]

SOLR-12162: CorePropertiesLocator Exception message contains a typo when unable 
to create Solr Core

(cherry picked from commit e55b7e9)


> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
>  Labels: patch, pull-request-available
> Fix For: master (8.0)
>
> Attachments: SOLR-12162_corepropertieslocator_typo.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
> (note the missing space between {{instancedirectory}} and {{as}})
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420020#comment-16420020
 ] 

ASF subversion and git services commented on SOLR-12162:


Commit e55b7e9911165fdf99682990c743e9bcd6cbd4f9 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e55b7e9 ]

SOLR-12162: CorePropertiesLocator Exception message contains a typo when unable 
to create Solr Core


> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
>  Labels: patch, pull-request-available
> Fix For: master (8.0)
>
> Attachments: SOLR-12162_corepropertieslocator_typo.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
> (note the missing space between {{instancedirectory}} and {{as}})
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12165) Ref Guide: DisMax default mm param value is improperly documented as 100%

2018-03-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420015#comment-16420015
 ] 

Steve Rowe edited comment on SOLR-12165 at 3/30/18 12:51 AM:
-

I've attached a patch that corrects the dismax doc, and adds an {{mm}} param 
item to the edismax doc, referring to dismax's doc, but spelling out its 
different default {{mm}} handling.

I'll hold off committing this for a day or so, in case anybody would like to 
review.

I don't think this should necessarily trigger a 7.3 ref guide RC respin, but it 
should be included if a respin occurs for some other reason. 


was (Author: steve_rowe):
I've attached a patch that corrects the dismax doc, and adds a {{mm}} param 
item to the edismax doc, referring to dismax's doc, but spelling out its 
different default {{mm}} handling.

I'll hold off committing this for a day or so, in case anybody would like to 
review.

I don't think this should necessarily trigger a 7.3 ref guide RC respin, but it 
should be included if a respin occurs for some other reason. 

> Ref Guide: DisMax default mm param value is improperly documented as 100%
> -
>
> Key: SOLR-12165
> URL: https://issues.apache.org/jira/browse/SOLR-12165
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-12165.patch
>
>
> {{DisMaxQParser.parseMinShouldMatch()}} sets default {{mm}} to 100% if 
> {{q.op}}=="AND", and to 0% otherwise.
> {{ExtendedDismaxQParser.parseOriginalQuery()}} sets default {{mm}} to 0% if 
> there are explicit operators other than "AND" in the query (see SOLR-2649 and 
> SOLR-8812), and otherwise falls through to dismax’s logic.
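
For reference, a hedged sketch of the default-mm selection being documented;
this is simplified and is not the literal DisMaxQParser source, though the
parameter constants used are the stock DisMaxParams.MM and QueryParsing.OP:

{code:java}
import org.apache.solr.common.params.DisMaxParams;
import org.apache.solr.common.params.SolrParams;
import org.apache.solr.search.QueryParsing;

class MmDefaultSketch {
  // Simplified sketch of the documented dismax behavior.
  static String effectiveMinShouldMatch(SolrParams params) {
    String mm = params.get(DisMaxParams.MM);
    if (mm != null) {
      return mm;                          // an explicit mm always wins
    }
    // q.op=AND -> all optional clauses required (100%); otherwise none (0%).
    // edismax additionally forces 0% when the query contains explicit
    // operators other than AND, before falling through to this logic.
    String qop = params.get(QueryParsing.OP);
    return "AND".equals(qop) ? "100%" : "0%";
  }
}
{code}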



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12165) Ref Guide: DisMax default mm param value is improperly documented as 100%

2018-03-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420015#comment-16420015
 ] 

Steve Rowe commented on SOLR-12165:
---

I've attached a patch that corrects the dismax doc, and adds a {{mm}} param 
item to the edismax doc, referring to dismax's doc, but spelling out its 
different default {{mm}} handling.

I'll hold off committing this for a day or so, in case anybody would like to 
review.

I don't think this should necessarily trigger a 7.3 ref guide RC respin, but it 
should be included if a respin occurs for some other reason. 

> Ref Guide: DisMax default mm param value is improperly documented as 100%
> -
>
> Key: SOLR-12165
> URL: https://issues.apache.org/jira/browse/SOLR-12165
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-12165.patch
>
>
> {{DisMaxQParser.parseMinShouldMatch()}} sets default {{mm}} to 100% if 
> {{q.op}}=="AND", and to 0% otherwise.
> {{ExtendedDismaxQParser.parseOriginalQuery()}} sets default {{mm}} to 0% if 
> there are explicit operators other than "AND" in the query (see SOLR-2649 and 
> SOLR-8812), and otherwise falls through to dismax’s logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.3 - Build # 38 - Still unstable

2018-03-29 Thread Apache Jenkins Server

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-12165) Ref Guide: DisMax default mm param value is improperly documented as 100%

2018-03-29 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-12165:
--
Attachment: SOLR-12165.patch

> Ref Guide: DisMax default mm param value is improperly documented as 100%
> -
>
> Key: SOLR-12165
> URL: https://issues.apache.org/jira/browse/SOLR-12165
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-12165.patch
>
>
> {{DisMaxQParser.parseMinShouldMatch()}} sets default {{mm}} to 100% if 
> {{q.op}}=="AND", and to 0% otherwise.
> {{ExtendedDismaxQParser.parseOriginalQuery()}} sets default {{mm}} to 0% if 
> there are explicit operators other than "AND" in the query (see SOLR-2649 and 
> SOLR-8812), and otherwise falls through to dismax’s logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12165) Ref Guide: DisMax default mm param value is improperly documented as 100%

2018-03-29 Thread Steve Rowe (JIRA)
Steve Rowe created SOLR-12165:
-

 Summary: Ref Guide: DisMax default mm param value is improperly 
documented as 100%
 Key: SOLR-12165
 URL: https://issues.apache.org/jira/browse/SOLR-12165
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Steve Rowe
Assignee: Steve Rowe
 Attachments: SOLR-12165.patch

{{DisMaxQParser.parseMinShouldMatch()}} sets default {{mm}} to 100% if 
{{q.op}}=="AND", and to 0% otherwise.

{{ExtendedDismaxQParser.parseOriginalQuery()}} sets default {{mm}} to 0% if 
there are explicit operators other than "AND" in the query (see SOLR-2649 and 
SOLR-8812), and otherwise falls through to dismax’s logic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12150) CdcrBidirectionalTest.testBiDir() reproducing failure

2018-03-29 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16420010#comment-16420010
 ] 

Amrit Sarkar commented on SOLR-12150:
-

[~steve_rowe]: I added a patch with the fix for the test, added a basic sanity 
size check, and refactored a little around the atomic test. Thank you for 
reporting this.
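
A hedged sketch of what such a sanity size check might look like; the helper 
name mirrors the getDocFieldValue frame in the quoted stack trace below, but 
the actual patch may differ:

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocumentList;

import static org.junit.Assert.assertFalse;

class SanitySizeCheckSketch {
  // Fail with a readable assertion instead of letting results.get(0) throw
  // IndexOutOfBoundsException when the query matches no documents.
  static Object getDocFieldValue(CloudSolrClient client, String collection,
                                 String query, String field) throws Exception {
    QueryResponse rsp = client.query(collection, new SolrQuery(query));
    SolrDocumentList results = rsp.getResults();
    assertFalse("no documents matched query: " + query, results.isEmpty());
    return results.get(0).getFieldValue(field);
  }
}
{code}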

> CdcrBidirectionalTest.testBiDir() reproducing failure
> -
>
> Key: SOLR-12150
> URL: https://issues.apache.org/jira/browse/SOLR-12150
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: SOLR-12150.patch
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/538/] (also 
> reproduces for me on Linux):
> {noformat}
> Checking out Revision e80ee7fff85918e68c212757c0e6c4bddbdb5ab6 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=CdcrBidirectionalTest -Dtests.method=testBiDir 
> -Dtests.seed=38DB802FA0173E8D -Dtests.slow=true -Dtests.locale=ro-RO 
> -Dtests.timezone=Etc/GMT-8 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   23.3s J0 | CdcrBidirectionalTest.testBiDir <<<
>[junit4]> Throwable #1: java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([38DB802FA0173E8D:7D0070CDB83972CF]:0)
>[junit4]>  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>[junit4]>  at java.util.ArrayList.get(ArrayList.java:429)
>[junit4]>  at 
> org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.getDocFieldValue(CdcrBidirectionalTest.java:227)
>[junit4]>  at 
> org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir(CdcrBidirectionalTest.java:200)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70), 
> sim=RandomSimilarity(queryNorm=false): {}, locale=ro-RO, timezone=Etc/GMT-8
>[junit4]   2> NOTE: Mac OS X 10.11.6 x86_64/Oracle Corporation 1.8.0_144 
> (64-bit)/cpus=3,threads=1,free=160960440,total=347418624
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12150) CdcrBidirectionalTest.testBiDir() reproducing failure

2018-03-29 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12150:

Attachment: SOLR-12150.patch

> CdcrBidirectionalTest.testBiDir() reproducing failure
> -
>
> Key: SOLR-12150
> URL: https://issues.apache.org/jira/browse/SOLR-12150
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, Tests
>Reporter: Steve Rowe
>Priority: Major
> Attachments: SOLR-12150.patch
>
>
> From [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/538/] (also 
> reproduces for me on Linux):
> {noformat}
> Checking out Revision e80ee7fff85918e68c212757c0e6c4bddbdb5ab6 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=CdcrBidirectionalTest -Dtests.method=testBiDir 
> -Dtests.seed=38DB802FA0173E8D -Dtests.slow=true -Dtests.locale=ro-RO 
> -Dtests.timezone=Etc/GMT-8 -Dtests.asserts=true -Dtests.file.encoding=UTF-8
>[junit4] ERROR   23.3s J0 | CdcrBidirectionalTest.testBiDir <<<
>[junit4]> Throwable #1: java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([38DB802FA0173E8D:7D0070CDB83972CF]:0)
>[junit4]>  at java.util.ArrayList.rangeCheck(ArrayList.java:653)
>[junit4]>  at java.util.ArrayList.get(ArrayList.java:429)
>[junit4]>  at 
> org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.getDocFieldValue(CdcrBidirectionalTest.java:227)
>[junit4]>  at 
> org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir(CdcrBidirectionalTest.java:200)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70), 
> sim=RandomSimilarity(queryNorm=false): {}, locale=ro-RO, timezone=Etc/GMT-8
>[junit4]   2> NOTE: Mac OS X 10.11.6 x86_64/Oracle Corporation 1.8.0_144 
> (64-bit)/cpus=3,threads=1,free=160960440,total=347418624
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied

2018-03-29 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12161:
--
Attachment: tests.patch

> CloudSolrClient with basic auth enabled will update even if no credentials 
> supplied
> ---
>
> Key: SOLR-12161
> URL: https://issues.apache.org/jira/browse/SOLR-12161
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 7.3
>Reporter: Erick Erickson
>Assignee: Noble Paul
>Priority: Major
> Attachments: AuthUpdateTest.java, tests.patch
>
>
> This is an offshoot of SOLR-9399. When I was writing a test, if I create a 
> cluster with basic authentication set up, I can _still_ add documents to a 
> collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the 
> request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update 
> request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting 
> up the update request to go to each individual shard?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied

2018-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419968#comment-16419968
 ] 

Erick Erickson commented on SOLR-12161:
---

I uploaded a patch I was working on for this; it may be useful, [~noble.paul].

And of course I'll be happy to help out however I can.

> CloudSolrClient with basic auth enabled will update even if no credentials 
> supplied
> ---
>
> Key: SOLR-12161
> URL: https://issues.apache.org/jira/browse/SOLR-12161
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 7.3
>Reporter: Erick Erickson
>Assignee: Noble Paul
>Priority: Major
> Attachments: AuthUpdateTest.java, tests.patch
>
>
> This is an offshoot of SOLR-9399. When I was writing a test, if I create a 
> cluster with basic authentication set up, I can _still_ add documents to a 
> collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the 
> request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update 
> request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting 
> up the update request to go to each individual shard?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21721 - Failure!

2018-03-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21721/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit

Error Message:
this IndexWriter is closed

Stack Trace:
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:899)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:913)
at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3513)
at 
org.apache.lucene.index.TestIndexingSequenceNumbers.testStressConcurrentCommit(TestIndexingSequenceNumbers.java:230)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.OutOfMemoryError: Java heap space


FAILED:  org.apache.solr.cloud.autoscaling.ScheduledTriggerTest.testTrigger

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([82DADCB61BBC0E0F:E111EA3482737D22]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 

[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2018-03-29 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: SOLR-11724.patch

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.1
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch, SOLR-11724.patch
>
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into Source, stop indexing, and 
> then start CDCR, bootstrapping only copies the index to the leader node of 
> each shard of the collection; the followers never receive the documents/index 
> until at least one document is inserted again on Source, which propagates to 
> Target and makes the target collection trigger index replication to the followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8232) Separate out PendingDeletes from ReadersAndUpdates

2018-03-29 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419865#comment-16419865
 ] 

Simon Willnauer commented on LUCENE-8232:
-

here is a link to a PR for review https://github.com/s1monw/lucene-solr/pull/7

>  Separate out PendingDeletes from ReadersAndUpdates
> ---
>
> Key: LUCENE-8232
> URL: https://issues.apache.org/jira/browse/LUCENE-8232
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8232.patch
>
>
> Today ReadersAndUpdates is tightly coupled with IW and all the handling of 
> pending deletes. This change decouples IW and pending deletes from 
> ReadersAndUpdates and allows expert users to customize how deletes are 
> handled. This is useful, or to a certain extent even mandatory, when working 
> with soft-deletes, to allow merge policies to make the right decisions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8232) Separate out PendingDeletes from ReadersAndUpdates

2018-03-29 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8232:

Attachment: LUCENE-8232.patch

>  Separate out PendingDeletes from ReadersAndUpdates
> ---
>
> Key: LUCENE-8232
> URL: https://issues.apache.org/jira/browse/LUCENE-8232
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8232.patch
>
>
> Today ReadersAndUpdates is tightly coupled with IW and all the handling of 
> pending deletes. This change decouples IW and pending deletes from 
> ReadersAndUpdates and allows expert users to customize how deletes are 
> handled. This is useful, or to a certain extent even mandatory, when working 
> with soft-deletes, to allow merge policies to make the right decisions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #313: SOLR-11924: Added a way to create collection set wat...

2018-03-29 Thread dennisgove
Github user dennisgove commented on the issue:

https://github.com/apache/lucene-solr/pull/313
  
I think following the structure in `LiveNodesListener` makes more sense. 
Perhaps a name of `CloudCollectionsSetListener` or `CloudCollectionsListener`.
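
For illustration only, a hypothetical sketch of what a listener shaped like 
`LiveNodesListener` might look like for collections; the interface name and 
method signature here are assumptions about the suggestion, not the API that 
was eventually merged:

```java
import java.util.Set;

// Hypothetical sketch: notified with the previous and current sets of
// collection names whenever the set of collections in the cluster changes.
public interface CloudCollectionsListener {
  void onChange(Set<String> oldCollections, Set<String> newCollections);
}
```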


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8232) Separate out PendingDeletes from ReadersAndUpdates

2018-03-29 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-8232:
---

 Summary:  Separate out PendingDeletes from ReadersAndUpdates
 Key: LUCENE-8232
 URL: https://issues.apache.org/jira/browse/LUCENE-8232
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 7.4, master (8.0)
Reporter: Simon Willnauer
 Fix For: 7.4, master (8.0)


Today ReadersAndUpdates is tightly coupled with IW and all the handling of 
pending deletes. This change decouples IW and pending deletes from 
ReadersAndUpdates and allows expert users to customize how deletes are handled. 
This is useful, or to a certain extent even mandatory, when working with 
soft-deletes, to allow merge policies to make the right decisions.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1770 - Still Unstable!

2018-03-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1770/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.search.TestStressRecovery.testStressRecovery

Error Message:
Captured an uncaught exception in thread: Thread[id=27706, name=READER3, 
state=RUNNABLE, group=TGRP-TestStressRecovery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=27706, name=READER3, state=RUNNABLE, 
group=TGRP-TestStressRecovery]
at 
__randomizedtesting.SeedInfo.seed([F746DE6363639A6:B54E04BBA9DE86A8]:0)
Caused by: java.lang.RuntimeException: java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([F746DE6363639A6]:0)
at 
org.apache.solr.search.TestStressRecovery$2.run(TestStressRecovery.java:332)
Caused by: java.lang.NullPointerException
at org.apache.solr.update.TransactionLog.lookup(TransactionLog.java:520)
at org.apache.solr.update.UpdateLog.lookup(UpdateLog.java:980)
at 
org.apache.solr.handler.component.RealTimeGetComponent.process(RealTimeGetComponent.java:236)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
at 
org.apache.solr.handler.RealTimeGetHandler.handleRequestBody(RealTimeGetHandler.java:46)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2508)
at org.apache.solr.util.TestHarness.query(TestHarness.java:337)
at org.apache.solr.util.TestHarness.query(TestHarness.java:319)
at 
org.apache.solr.search.TestStressRecovery$2.run(TestStressRecovery.java:307)




Build Log:
[...truncated 1846 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/lucene/build/core/test/temp/junit4-J1-20180329_203611_9986907444613648827695.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] codec: DummyCompressingStoredFields, pf: Memory, dvf: Asserting
   [junit4] <<< JVM J1: EOF 

[...truncated 11929 lines...]
   [junit4] Suite: org.apache.solr.search.TestStressRecovery
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.search.TestStressRecovery_F746DE6363639A6-001/init-core-data-001
   [junit4]   2> 2508266 INFO  
(SUITE-TestStressRecovery-seed#[F746DE6363639A6]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/test-files/solr/collection1/lib,
 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 2508283 INFO  
(SUITE-TestStressRecovery-seed#[F746DE6363639A6]-worker) [] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 2508290 INFO  
(SUITE-TestStressRecovery-seed#[F746DE6363639A6]-worker) [] 
o.a.s.s.IndexSchema [null] Schema name=test
   [junit4]   2> 2508342 INFO  
(SUITE-TestStressRecovery-seed#[F746DE6363639A6]-worker) [] 
o.a.s.s.IndexSchema Loaded schema test/1.6 with uniqueid field id
   [junit4]   2> 2508375 INFO  
(SUITE-TestStressRecovery-seed#[F746DE6363639A6]-worker) [] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.node' (registry 'solr.node') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4c19f931
   [junit4]   2> 2508380 INFO  
(SUITE-TestStressRecovery-seed#[F746DE6363639A6]-worker) [] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jvm' (registry 'solr.jvm') 
enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4c19f931
   [junit4]   2> 2508380 INFO  
(SUITE-TestStressRecovery-seed#[F746DE6363639A6]-worker) [] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 'solr.jetty' (registry 
'solr.jetty') enabled at server: com.sun.jmx.mbeanserver.JmxMBeanServer@4c19f931
   [junit4]   2> 2508381 INFO  (coreLoadExecutor-8301-thread-1) [] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 
transient cores
   [junit4]   2> 2508381 INFO  (coreLoadExecutor-8301-thread-1) [
x:collection1] o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, 
from paths: 
[/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/test-files/solr/collection1/lib,
 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 2508409 INFO  (coreLoadExecutor-8301-thread-1) [
x:collection1] o.a.s.c.SolrConfig Using Lucene MatchVersion: 8.0.0
   [junit4]   2> 2508415 INFO  (coreLoadExecutor-8301-thread-1) [
x:collection1] o.a.s.s.IndexSchema [collection1] Schema name=test
   [junit4]   2> 2508466 INFO  (coreLoadExecutor-8301-thread-1) [
x:collection1] o.a.s.s.IndexSchema Loaded schema test/1.6 

Montreal

2018-03-29 Thread Alexandre Rafalovitch
To any Lucene/Solr committers, I just wanted to mention that I am currently
living in Montreal.

So if this suddenly becomes relevant to you, I will be happy to help with
advice or logistics...

Regards,
   Alex


[JENKINS] Lucene-Solr-repro - Build # 382 - Still unstable

2018-03-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/382/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1515/consoleText

[repro] Revision: 060d82af316bd2512e3f014365ce82db6c2f9fdf

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.seed=D1979BF5E818AF83 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-KW -Dtests.timezone=America/Montserrat -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=checkCollectionParameters -Dtests.seed=604C31EE0DEE4F34 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-CH -Dtests.timezone=Asia/Rangoon -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
701af06f627be98ddc8db083dc4dd51dbfe4936a
[repro] git fetch
[repro] git checkout 060d82af316bd2512e3f014365ce82db6c2f9fdf

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestReplicationHandler
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3296 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestReplicationHandler" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=D1979BF5E818AF83 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-KW -Dtests.timezone=America/Montserrat -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 262684 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 447 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=604C31EE0DEE4F34 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-CH -Dtests.timezone=Asia/Rangoon -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 138 lines...]
[repro] Failures:
[repro]   0/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   1/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro] git checkout 701af06f627be98ddc8db083dc4dd51dbfe4936a

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12065) Restore replica always in buffering state

2018-03-29 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419749#comment-16419749
 ] 

Mikhail Khludnev commented on SOLR-12065:
-

The patch seems great. Thanks, [~rohitcse]. [~varunthacker], would you mind 
skimming through? If you approve, I'll proceed with the commit.

> Restore replica always in buffering state
> -
>
> Key: SOLR-12065
> URL: https://issues.apache.org/jira/browse/SOLR-12065
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
> Attachments: 12065.patch, 12605UTLogs.txt.zip, logs_and_metrics.zip, 
> restore_snippet.log
>
>
> Steps to reproduce:
>  
>  - 
> [http://localhost:8983/solr/admin/collections?action=CREATE=test_backup=1=1]
>  - curl [http://127.0.0.1:8983/solr/test_backup/update?commit=true] -H 
> 'Content-type:application/json' -d '
>  [ \{"id" : "1"}
> ]' 
>  - 
> [http://localhost:8983/solr/admin/collections?action=BACKUP=test_backup=test_backup=/Users/varunthacker/backups]
>  - 
> [http://localhost:8983/solr/admin/collections?action=RESTORE=test_backup=/Users/varunthacker/backups=test_restore]
>  * curl [http://127.0.0.1:8983/solr/test_restore/update?commit=true] -H 
> 'Content-type:application/json' -d '
>  [
> {"id" : "2"}
> ]'
>  * Snippet when you try adding a document
> {code:java}
> INFO - 2018-03-07 22:48:11.555; [c:test_restore s:shard1 r:core_node22 
> x:test_restore_shard1_replica_n21] 
> org.apache.solr.update.processor.DistributedUpdateProcessor; Ignoring commit 
> while not ACTIVE - state: BUFFERING replay: false
> INFO - 2018-03-07 22:48:11.556; [c:test_restore s:shard1 r:core_node22 
> x:test_restore_shard1_replica_n21] 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor;
>  [test_restore_shard1_replica_n21] webapp=/solr path=/update 
> params={commit=true}{add=[2 (1594320896973078528)],commit=} 0 4{code}
>  * If you see "TLOG.state" from [http://localhost:8983/solr/admin/metrics] 
> it's always 1 (BUFFERING)
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-03-29 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419729#comment-16419729
 ] 

David Smiley commented on SOLR-12136:
-

Thanks [~ctargett]!  I ran "ant precommit" and thought that was going to find 
linking issues (thanks to recent work by Hoss?) but I guess not in this case?  
It's a shame it appears we're forced to link to an anchor.

> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1=something
> then "something" is analyzed against the default field rather than f1
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung"
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1
> These work
> q=f1:Kündigung=f1
> q=f1:Kündigung=f1=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung=f1=f1:Kündigung <= NOTE, with umlaut
> This does not work
> q=f1:Kündigung=f1=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy as it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And let's say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both are solvable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}
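
For concreteness, a hedged SolrJ sketch of the {{hl.q}} local-params workaround 
mentioned above; the Solr URL, collection, field name f1, and sample query are 
placeholders:

{code:java}
// Sketch only; URL, collection, field, and query text are illustrative.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class HighlightQuerySketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("f1:Kündigung"); // main query, parsed against f1
      q.setHighlight(true);
      q.addHighlightField("f1");
      // Re-parse the highlight query with edismax against the highlight field(s),
      // so the analysis matches the field being highlighted.
      q.set("hl.q", "{!edismax qf=$hl.fl v=$q}");
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getHighlighting());
    }
  }
}
{code}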



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12036) factor out DefaultStreamFactory class

2018-03-29 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419705#comment-16419705
 ] 

Christine Poerschke commented on SOLR-12036:


bq. ... reduce the need for {{withFunctionName}} method calls in client code.

https://github.com/deeplearning4j/deeplearning4j/pull/4876 illustrates how 
client code could have its own local {{DefaultStreamFactory}}, though it would 
be more convenient for solrj to provide one, e.g. a supplied default 
factory would automatically contain new streams as and when they are added.
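
As a rough illustration of the motivation (note that {{DefaultStreamFactory}} is 
the class proposed in this issue and did not yet exist in solrj at the time of 
this discussion):

{code:java}
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;

public class StreamFactorySetupSketch {
  /** Today: client code must register every stream function it uses by hand. */
  static StreamFactory manuallyRegistered() {
    return new StreamFactory()
        .withCollectionZkHost("collection1", "localhost:9983")
        .withFunctionName("search", CloudSolrStream.class);
  }
  // A supplied DefaultStreamFactory would ship with the built-in stream functions
  // pre-registered, so the withFunctionName call above would no longer be needed
  // and new streams would be picked up automatically as they are added to solrj.
}
{code}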

> factor out DefaultStreamFactory class
> -
>
> Key: SOLR-12036
> URL: https://issues.apache.org/jira/browse/SOLR-12036
> Project: Solr
>  Issue Type: Task
>  Components: streaming expressions
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12036.patch
>
>
> Motivation for the proposed class is to reduce the need for 
> {{withFunctionName}} method calls in client code.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11838) explore supporting Deeplearning4j NeuralNetwork models

2018-03-29 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419681#comment-16419681
 ] 

Christine Poerschke commented on SOLR-11838:


I've been playing around some more with deeplearning4j and Solr streaming 
expressions ...

The new pull request 
[deeplearning4j/pull/4876|https://github.com/deeplearning4j/deeplearning4j/pull/4876]
 shares the results, proposing to add a {{DataSetIterator}} implementation 
(tentatively named {{TupleStreamDataSetIterator}}) which uses a [streaming 
expression|https://lucene.apache.org/solr/guide/7_2/streaming-expressions.html] 
to fetch data from Solr and/or one of the sources (e.g. {{jdbc}}) supported as 
a [stream 
source|https://lucene.apache.org/solr/guide/7_2/stream-source-reference.html].

... this is not specific to Learning-To-Rank and I have no specific real use 
case in mind as yet, but I would be curious to hear if anyone can think of 
scenarios where something other than fields is part of the streaming 
expression used in the iterator.
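
For illustration, a hypothetical usage sketch; {{TupleStreamDataSetIterator}} is 
only the tentative name from the pull request, and the constructor shown in the 
comments below is an assumption, not a published API:

{code:java}
public class TupleStreamTrainingSketch {
  public static void main(String[] args) {
    // A streaming expression pulling feature and label fields from Solr;
    // the collection and field names are placeholders.
    String expr =
        "search(mlCollection, q=\"*:*\", fl=\"feature1_d,feature2_d,label_i\", sort=\"id asc\")";
    // In the proposal, something along the lines of
    //   DataSetIterator iter = new TupleStreamDataSetIterator(expr, zkHost, ...);
    //   network.fit(iter);
    // would feed the streamed tuples into a deeplearning4j network as DataSets.
    System.out.println(expr);
  }
}
{code}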

> explore supporting Deeplearning4j NeuralNetwork models
> --
>
> Key: SOLR-11838
> URL: https://issues.apache.org/jira/browse/SOLR-11838
> Project: Solr
>  Issue Type: New Feature
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-11838.patch, SOLR-11838.patch
>
>
> [~yuyano] wrote in SOLR-11597:
> bq. ... If we think to apply this to more complex neural networks in the 
> future, we will need to support layers ...
> [~malcorn_redhat] wrote in SOLR-11597:
> bq. ... In my opinion, if this is a route Solr eventually wants to go, I 
> think a better strategy would be to just add a dependency on 
> [Deeplearning4j|https://deeplearning4j.org/] ...
> Creating this ticket for the idea to be explored further (if anyone is 
> interested in exploring it), complementary to and independent of the 
> SOLR-11597 RankNet related effort.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page

2018-03-29 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419678#comment-16419678
 ] 

Cassandra Targett commented on SOLR-12163:
--

bq. one thing I wanted to add was making sure users enable GC logging, proper 
log rotation and heap settings for their zookeeper installation

Ah, great point. We currently say nothing that I can find anywhere in the guide 
about proper production operationalization of ZK, and we probably should (if 
not on this page, then in Taking Solr to Production for sure).

> Ref Guide: Improve Setting Up an External ZK Ensemble page
> --
>
> Key: SOLR-12163
> URL: https://issues.apache.org/jira/browse/SOLR-12163
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: setting-up-an-external-zookeeper-ensemble.adoc
>
>
> I had to set up a ZK ensemble the other day for the first time in a while, 
> and thought I'd test our docs on the subject while I was at it. I headed over 
> to 
> https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html,
>  and...Well, I still haven't gotten back to what I was trying to do, but I 
> rewrote the entire page.
> The problem to me is that the page today is mostly a stripped down copy of 
> the ZK Getting Started docs: walking through setting up a single ZK instance 
> before introducing the idea of an ensemble and going back through the same 
> configs again to update them for the ensemble.
> IOW, despite the page being titled "setting up an ensemble", it's mostly 
> about not setting up an ensemble. That's at the end of the page, which itself 
> focuses a bit heavily on the use case of running an ensemble on a single 
> server (so, if you're counting...that's 3 use cases we don't want people to 
> use discussed in detail on a page that's supposedly about _not_ doing any of 
> those things).
> So, I took all of it and restructured the whole thing to focus primarily on 
> the use case we want people to use: running 3 ZK nodes on different machines. 
> Running 3 on one machine is still there, but noted in passing with the 
> appropriate caveats. I've also added information about choosing to use a 
> chroot, which AFAICT was only covered in the section on Taking Solr to 
> Production.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12164) Ref Guide: Redesign HTML version landing page

2018-03-29 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419660#comment-16419660
 ] 

Cassandra Targett edited comment on SOLR-12164 at 3/29/18 8:03 PM:
---

Here's a few screenshots of what I had in mind. The "NewLandingPageTop.png", 
"-Mid" and "-Bottom" show most of what the new main page of the Ref Guide will 
look like. The meat of it is in the top, though, really.

* Removed page title across the top and replaced it with a "jumbotron" - 
basically a bunch of pre-defined JS that comes with Bootstrap and makes a big 
box on the page.
** The "copy" (the text) in the box needs a bit of work maybe?
** Change the link to download into a button (this goes to the mirror for the 
latest, but could be set to go to archives for the version of the Guide the 
person happens to be on...I prefer the mirror, but am open to other ideas).
* Split the wall of text into about 6 boxes, organizing the main section 
headings into higher-level groupings.
** Left nav is still there for anyone who prefers to see the main sections in 
their natural hierarchical order.
** These could use a bit of review also - totally open to the idea there could 
be better titles for the boxes, and I know we need better "copy" for each item 
in the box.

In the PDF, the jumbotron thing does not come into play because that's an HTML 
thing, but those boxes are still used. Since this comes after a 4-page table of 
contents, repeating the main headings doesn't seem to do a ton of service, 
really. The "PDF-intro.png" shows how this would look.

An extension of the idea behind this layout is that it could be extended to 
some (maybe most, or even all) of the pages that head up each main section as a 
better way to introduce the topics in those sections, but I'm not promising 
that as part of this issue.

Thoughts? Opinions?


was (Author: ctargett):
Here's a few screenshots of what I had in mind. The "NewLandingPageTop.png", 
"-Mid" and "-Bottom" show most of what the new main page of the Ref Guide will 
look like. The meat of it is in the top, though, really.

* Removed page and replaced it with a "jumbotron" - basically a bunch of 
pre-defined JS that comes with Bootstrap and makes a big box on the page.
** The "copy" (the text) in the box needs a bit of work maybe?
** Change the link to download into a button (this goes to the mirror for the 
latest, but could be set to go to archives for the version of the Guide the 
person happens to be on...I prefer the mirror, but am open to other ideas).
* Split the wall of text into about 6 boxes, organizing the main section 
headings into higher-level groupings.
** Left nav is still there for anyone who prefers to see the main sections in 
their natural hierarchical order.
** These could use a bit of review also - totally open to the idea there could 
be better titles for the boxes, and I know we need better "copy" for each item 
in the box.

In the PDF, the jumbotron thing does not come into play because that's an HTML 
thing, but those boxes are still used. Since this comes after a 4-page table of 
contents, repeating the main headings doesn't seem to do a ton of service, 
really. The "PDF-intro.png" shows how this would look.

An extension of the idea behind this layout is that it could be extended to 
some (maybe most, or even all) of the pages that head up each main section as a 
better way to introduce the topics in those sections, but I'm not promising 
that as part of this issue.

Thoughts? Opinions?

> Ref Guide: Redesign HTML version landing page
> -
>
> Key: SOLR-12164
> URL: https://issues.apache.org/jira/browse/SOLR-12164
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: NewLandingPageBottom.png, NewLandingPageMid.png, 
> NewLandingPageTop.png, PDF-intro.png
>
>
> We've had the same first page of the Ref Guide for a long time, and it's 
> probably fine as far as it goes, but that isn't very far. It's effectively a 
> wall of text. 
> Since we've got the ability to work with an online presentation, and we have 
> some tools available already in use (BootstrapJS, etc.), we can do some new 
> things.
> I've got a couple ideas I was playing with a few months ago. I'll dust those 
> off and attach some screenshots here + a patch or two. These will, of course, 
> work for the PDF so I'll include something to show that too (it can also be 
> snazzier).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, 

[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page

2018-03-29 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419663#comment-16419663
 ] 

Varun Thacker commented on SOLR-12163:
--

This looks great!

I need to read it more thoroughly but one thing I wanted to add was making sure 
users enable GC logging, proper log rotation and heap settings for their 
zookeeper installation.  This can greatly help in debugging the root cause when 
dealing with cluster issues.

> Ref Guide: Improve Setting Up an External ZK Ensemble page
> --
>
> Key: SOLR-12163
> URL: https://issues.apache.org/jira/browse/SOLR-12163
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: setting-up-an-external-zookeeper-ensemble.adoc
>
>
> I had to set up a ZK ensemble the other day for the first time in a while, 
> and thought I'd test our docs on the subject while I was at it. I headed over 
> to 
> https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html,
>  and...Well, I still haven't gotten back to what I was trying to do, but I 
> rewrote the entire page.
> The problem to me is that the page today is mostly a stripped down copy of 
> the ZK Getting Started docs: walking through setting up a single ZK instance 
> before introducing the idea of an ensemble and going back through the same 
> configs again to update them for the ensemble.
> IOW, despite the page being titled "setting up an ensemble", it's mostly 
> about not setting up an ensemble. That's at the end of the page, which itself 
> focuses a bit heavily on the use case of running an ensemble on a single 
> server (so, if you're counting...that's 3 use cases we don't want people to 
> use discussed in detail on a page that's supposedly about _not_ doing any of 
> those things).
> So, I took all of it and restructured the whole thing to focus primarily on 
> the use case we want people to use: running 3 ZK nodes on different machines. 
> Running 3 on one machine is still there, but noted in passing with the 
> appropriate caveats. I've also added information about choosing to use a 
> chroot, which AFAICT was only covered in the section on Taking Solr to 
> Production.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-12164) Ref Guide: Redesign HTML version landing page

2018-03-29 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12164:
-
Comment: was deleted

(was: This looks great!

 

I need to read it more thoroughly but one thing I wanted to add was making sure 
users enable GC logging, proper log rotation and heap settings for their 
zookeeper installation.  This can greatly help in debugging the root cause when 
dealing with cluster issues.)

> Ref Guide: Redesign HTML version landing page
> -
>
> Key: SOLR-12164
> URL: https://issues.apache.org/jira/browse/SOLR-12164
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: NewLandingPageBottom.png, NewLandingPageMid.png, 
> NewLandingPageTop.png, PDF-intro.png
>
>
> We've had the same first page of the Ref Guide for a long time, and it's 
> probably fine as far as it goes, but that isn't very far. It's effectively a 
> wall of text. 
> Since we've got the ability to work with an online presentation, and we have 
> some tools available already in use (BootstrapJS, etc.), we can do some new 
> things.
> I've got a couple ideas I was playing with a few months ago. I'll dust those 
> off and attach some screenshots here + a patch or two. These will, of course, 
> work for the PDF so I'll include something to show that too (it can also be 
> snazzier).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12164) Ref Guide: Redesign HTML version landing page

2018-03-29 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12164:
-
Attachment: PDF-intro.png
NewLandingPageTop.png
NewLandingPageMid.png
NewLandingPageBottom.png

> Ref Guide: Redesign HTML version landing page
> -
>
> Key: SOLR-12164
> URL: https://issues.apache.org/jira/browse/SOLR-12164
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: NewLandingPageBottom.png, NewLandingPageMid.png, 
> NewLandingPageTop.png, PDF-intro.png
>
>
> We've had the same first page of the Ref Guide for a long time, and it's 
> probably fine as far as it goes, but that isn't very far. It's effectively a 
> wall of text. 
> Since we've got the ability to work with an online presentation, and we have 
> some tools available already in use (BootstrapJS, etc.), we can do some new 
> things.
> I've got a couple ideas I was playing with a few months ago. I'll dust those 
> off and attach some screenshots here + a patch or two. These will, of course, 
> work for the PDF so I'll include something to show that too (it can also be 
> snazzier).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12164) Ref Guide: Redesign HTML version landing page

2018-03-29 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419660#comment-16419660
 ] 

Cassandra Targett commented on SOLR-12164:
--

Here's a few screenshots of what I had in mind. The "NewLandingPageTop.png", 
"-Mid" and "-Bottom" show most of what the new main page of the Ref Guide will 
look like. The meat of it is in the top, though, really.

* Removed page and replaced it with a "jumbotron" - basically a bunch of 
pre-defined JS that comes with Bootstrap and makes a big box on the page.
** The "copy" (the text) in the box needs a bit of work maybe?
** Change the link to download into a button (this goes to the mirror for the 
latest, but could be set to go to archives for the version of the Guide the 
person happens to be on...I prefer the mirror, but am open to other ideas).
* Split the wall of text into about 6 boxes, organizing the main section 
headings into higher-level groupings.
** Left nav is still there for anyone who prefers to see the main sections in 
their natural hierarchical order.
** These could use a bit of review also - totally open to the idea there could 
be better titles for the boxes, and I know we need better "copy" for each item 
in the box.

In the PDF, the jumbotron thing does not come into play because that's an HTML 
thing, but those boxes are still used. Since this comes after a 4-page table of 
contents, repeating the main headings doesn't seem to do a ton of service, 
really. The "PDF-intro.png" shows how this would look.

An extension of the idea behind this layout is that it could be extended to 
some (maybe most, or even all) of the pages that head up each main section as a 
better way to introduce the topics in those sections, but I'm not promising 
that as part of this issue.

Thoughts? Opinions?

> Ref Guide: Redesign HTML version landing page
> -
>
> Key: SOLR-12164
> URL: https://issues.apache.org/jira/browse/SOLR-12164
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
>
> We've had the same first page of the Ref Guide for a long time, and it's 
> probably fine as far as it goes, but that isn't very far. It's effectively a 
> wall of text. 
> Since we've got the ability to work with an online presentation, and we have 
> some tools available already in use (BootstrapJS, etc.), we can do some new 
> things.
> I've got a couple ideas I was playing with a few months ago. I'll dust those 
> off and attach some screenshots here + a patch or two. These will, of course, 
> work for the PDF so I'll include something to show that too (it can also be 
> snazzier).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12164) Ref Guide: Redesign HTML version landing page

2018-03-29 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419648#comment-16419648
 ] 

Varun Thacker commented on SOLR-12164:
--

This looks great!

 

I need to read it more thoroughly but one thing I wanted to add was making sure 
users enable GC logging, proper log rotation and heap settings for their 
zookeeper installation.  This can greatly help in debugging the root cause when 
dealing with cluster issues.

> Ref Guide: Redesign HTML version landing page
> -
>
> Key: SOLR-12164
> URL: https://issues.apache.org/jira/browse/SOLR-12164
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
>
> We've had the same first page of the Ref Guide for a long time, and it's 
> probably fine as far as it goes, but that isn't very far. It's effectively a 
> wall of text. 
> Since we've got the ability to work with an online presentation, and we have 
> some tools available already in use (BootstrapJS, etc.), we can do some new 
> things.
> I've got a couple ideas I was playing with a few months ago. I'll dust those 
> off and attach some screenshots here + a patch or two. These will, of course, 
> work for the PDF so I'll include something to show that too (it can also be 
> snazzier).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419625#comment-16419625
 ] 

ASF subversion and git services commented on SOLR-12136:


Commit c4258531e619c7ccaf66d90ec8972f7733f25446 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c425853 ]

SOLR-12136: fix bad links breaking the build


> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1=something
> then "something" is analyzed against the default field rather than f1
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung"
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1
> These work
> q=f1:Kündigung=f1
> q=f1:Kündigung=f1=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung=f1=f1:Kündigung <= NOTE, with umlaut
> This does not work
> q=f1:Kündigung=f1=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy as it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And let's say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both are solvable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419623#comment-16419623
 ] 

ASF subversion and git services commented on SOLR-12136:


Commit b5a36785738a299cb00933c2d55c587917a2d9ab in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b5a3678 ]

SOLR-12136: fix bad links breaking the build


> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1=something
> then "something" is analyzed against the default field rather than f1
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung"
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1
> These work
> q=f1:Kündigung=f1
> q=f1:Kündigung=f1=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung=f1=f1:Kündigung <= NOTE, with umlaut
> This does not work
> q=f1:Kündigung=f1=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy as it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And let's say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both are solvable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Solr-reference-guide-master - Build # 6382 - Still Failing

2018-03-29 Thread Cassandra Targett
I'll fix this.

On Thu, Mar 29, 2018 at 2:05 PM, Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Solr-reference-guide-master/6382/
>
> Log:
> Started by timer
> [EnvInject] - Loading node environment variables.
> Building remotely on websites1 (git-websites) in workspace
> /home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
>  > git rev-parse --is-inside-work-tree # timeout=10
> Fetching changes from the remote Git repository
>  > git config remote.origin.url git://git.apache.org/lucene-solr.git #
> timeout=10
> Cleaning workspace
>  > git rev-parse --verify HEAD # timeout=10
> Resetting working tree
>  > git reset --hard # timeout=10
>  > git clean -fdx # timeout=10
> Fetching upstream changes from git://git.apache.org/lucene-solr.git
>  > git --version # timeout=10
>  > git fetch --tags --progress git://git.apache.org/lucene-solr.git
> +refs/heads/*:refs/remotes/origin/*
>  > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
>  > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
> Checking out Revision 1ce72537b8b7577657c275dd7a6bfbb081392575
> (refs/remotes/origin/master)
>  > git config core.sparsecheckout # timeout=10
>  > git checkout -f 1ce72537b8b7577657c275dd7a6bfbb081392575
> Commit message: "LUCENE-8106: add missing import"
>  > git rev-list --no-walk 1ce72537b8b7577657c275dd7a6bfbb081392575 #
> timeout=10
> No emails were triggered.
> [Solr-reference-guide-master] $ /usr/bin/env bash /tmp/
> jenkins9004906743541811525.sh
> + set -e
> + RVM_PATH=/home/jenkins/.rvm
> + RUBY_VERSION=ruby-2.3.3
> + GEMSET=solr-refguide-gemset
> + curl -sSL https://get.rvm.io
> + bash -s -- --ignore-dotfiles stable
> Turning on ignore dotfiles mode.
> Downloading https://github.com/rvm/rvm/archive/1.29.3.tar.gz
> Downloading https://github.com/rvm/rvm/releases/download/1.29.3/1.29.
> 3.tar.gz.asc
> gpg: Signature made Sun 10 Sep 2017 08:59:21 PM UTC using RSA key ID
> BF04FF17
> gpg: Good signature from "Michal Papis (RVM signing) "
> gpg: aka "Michal Papis "
> gpg: aka "[jpeg image of size 5015]"
> gpg: WARNING: This key is not certified with a trusted signature!
> gpg:  There is no indication that the signature belongs to the
> owner.
> Primary key fingerprint: 409B 6B17 96C2 7546 2A17  0311 3804 BB82 D39D C0E3
>  Subkey fingerprint: 62C9 E5F4 DA30 0D94 AC36  166B E206 C29F BF04 FF17
> GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.3.tgz'
>
> Upgrading the RVM installation in /home/jenkins/.rvm/
> Upgrade of RVM in /home/jenkins/.rvm/ is complete.
>
> Upgrade Notes:
>
>   * No new notes to display.
>
> + set +x
> Running 'source /home/jenkins/.rvm/scripts/rvm'
> Running 'rvm autolibs disable'
> Running 'rvm install ruby-2.3.3'
> Already installed ruby-2.3.3.
> To reinstall use:
>
> rvm reinstall ruby-2.3.3
>
> Running 'rvm gemset create solr-refguide-gemset'
> ruby-2.3.3 - #gemset created /home/jenkins/.rvm/gems/ruby-
> 2.3.3@solr-refguide-gemset
> ruby-2.3.3 - #generating solr-refguide-gemset wrappers
> Running 'rvm ruby-2.3.3@solr-refguide-gemset'
> Using /home/jenkins/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
> Running 'gem install --force --version 3.5.0 jekyll'
> Successfully installed jekyll-3.5.0
> Parsing documentation for jekyll-3.5.0
> Done installing documentation for jekyll after 1 seconds
> 1 gem installed
> Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
> Successfully installed jekyll-asciidoc-2.1.0
> Parsing documentation for jekyll-asciidoc-2.1.0
> Done installing documentation for jekyll-asciidoc after 0 seconds
> 1 gem installed
> Running 'gem install --force --version 1.1.2 pygments.rb'
> Successfully installed pygments.rb-1.1.2
> Parsing documentation for pygments.rb-1.1.2
> Done installing documentation for pygments.rb after 0 seconds
> 1 gem installed
> Running 'ant ivy-bootstrap'
> Buildfile: /home/jenkins/jenkins-slave/workspace/Solr-reference-
> guide-master/build.xml
>
> -ivy-bootstrap1:
>  [echo] installing ivy 2.4.0 to /home/jenkins/.ant/lib
>   [get] Getting: http://repo1.maven.org/maven2/
> org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar
>   [get] To: /home/jenkins/.ant/lib/ivy-2.4.0.jar
>   [get] Not modified - so not downloaded
>
> -ivy-bootstrap2:
>
> -ivy-checksum:
>
> -ivy-remove-old-versions:
>
> ivy-bootstrap:
>
> BUILD SUCCESSFUL
> Total time: 0 seconds
> + ant clean build-site build-pdf
> Buildfile: /home/jenkins/jenkins-slave/workspace/Solr-reference-
> guide-master/solr/solr-ref-guide/build.xml
>
> clean:
>
> build-init:
> [mkdir] Created dir: /home/jenkins/jenkins-slave/
> workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
>  [echo] Copying all non template files from 

[jira] [Created] (SOLR-12164) Ref Guide: Redesign HTML version landing page

2018-03-29 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-12164:


 Summary: Ref Guide: Redesign HTML version landing page
 Key: SOLR-12164
 URL: https://issues.apache.org/jira/browse/SOLR-12164
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Cassandra Targett
Assignee: Cassandra Targett
 Fix For: 7.4


We've had the same first page of the Ref Guide for a long time, and it's 
probably fine as far as it goes, but that isn't very far. It's effectively a 
wall of text. 

Since we've got the ability to work with an online presentation, and we have 
some tools available already in use (BootstrapJS, etc.), we can do some new 
things.

I've got a couple ideas I was playing with a few months ago. I'll dust those 
off and attach some screenshots here + a patch or two. These will, of course, 
work for the PDF so I'll include something to show that too (it can also be 
snazzier).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4535 - Still Unstable!

2018-03-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4535/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testKillPullReplica

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([FE3409DEEAFCF27F:72C5154B4A4D1347]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:538)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:529)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllActiveReplicas(TestPullReplica.java:525)
at 
org.apache.solr.cloud.TestPullReplica.testKillPullReplica(TestPullReplica.java:502)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Solr-reference-guide-master - Build # 6382 - Still Failing

2018-03-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/6382/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1ce72537b8b7577657c275dd7a6bfbb081392575 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1ce72537b8b7577657c275dd7a6bfbb081392575
Commit message: "LUCENE-8106: add missing import"
 > git rev-list --no-walk 1ce72537b8b7577657c275dd7a6bfbb081392575 # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /usr/bin/env bash 
/tmp/jenkins9004906743541811525.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.3.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.3/1.29.3.tar.gz.asc
gpg: Signature made Sun 10 Sep 2017 08:59:21 PM UTC using RSA key ID BF04FF17
gpg: Good signature from "Michal Papis (RVM signing) "
gpg: aka "Michal Papis "
gpg: aka "[jpeg image of size 5015]"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 409B 6B17 96C2 7546 2A17  0311 3804 BB82 D39D C0E3
 Subkey fingerprint: 62C9 E5F4 DA30 0D94 AC36  166B E206 C29F BF04 FF17
GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.3.tgz'

Upgrading the RVM installation in /home/jenkins/.rvm/
Upgrade of RVM in /home/jenkins/.rvm/ is complete.

Upgrade Notes:

  * No new notes to display.

+ set +x
Running 'source /home/jenkins/.rvm/scripts/rvm'
Running 'rvm autolibs disable'
Running 'rvm install ruby-2.3.3'
Already installed ruby-2.3.3.
To reinstall use:

rvm reinstall ruby-2.3.3

Running 'rvm gemset create solr-refguide-gemset'
ruby-2.3.3 - #gemset created 
/home/jenkins/.rvm/gems/ruby-2.3.3@solr-refguide-gemset
ruby-2.3.3 - #generating solr-refguide-gemset wrappers
Running 'rvm ruby-2.3.3@solr-refguide-gemset'
Using /home/jenkins/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
Running 'gem install --force --version 3.5.0 jekyll'
Successfully installed jekyll-3.5.0
Parsing documentation for jekyll-3.5.0
Done installing documentation for jekyll after 1 seconds
1 gem installed
Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
Successfully installed jekyll-asciidoc-2.1.0
Parsing documentation for jekyll-asciidoc-2.1.0
Done installing documentation for jekyll-asciidoc after 0 seconds
1 gem installed
Running 'gem install --force --version 1.1.2 pygments.rb'
Successfully installed pygments.rb-1.1.2
Parsing documentation for pygments.rb-1.1.2
Done installing documentation for pygments.rb after 0 seconds
1 gem installed
Running 'ant ivy-bootstrap'
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/build.xml

-ivy-bootstrap1:
 [echo] installing ivy 2.4.0 to /home/jenkins/.ant/lib
  [get] Getting: 
http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar
  [get] To: /home/jenkins/.ant/lib/ivy-2.4.0.jar
  [get] Not modified - so not downloaded

-ivy-bootstrap2:

-ivy-checksum:

-ivy-remove-old-versions:

ivy-bootstrap:

BUILD SUCCESSFUL
Total time: 0 seconds
+ ant clean build-site build-pdf
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/solr-ref-guide/build.xml

clean:

build-init:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copying all non template files from src ...
 [copy] Copying 410 files to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copy (w/prop replacement) any template files from src...
 [copy] Copying 1 file to 

[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419594#comment-16419594
 ] 

Jan Høydahl commented on SOLR-7896:
---

Let’s keep this issue for adding a login screen and handling initial 
authentication if such a plugin is enabled in Solr. I agree, Aaron, that the 
next step could be to simplify the initial bootstrap of authentication, but we 
already have a solution for that with the simple {{bin/solr auth}} command. But 
feel free to open another Jira about Admin UI support for enabling and managing 
security.

As Upayavira says, the Admin UI must handle authentication just like any other 
Solr client; we cannot have some “backdoor” for the UI only. But we could 
potentially allow two or more auth plugins to be active at the same time, so 
the Admin UI can always be used even if the user has configured an auth plugin 
that the UI does not support. We already have implicit support for PKI auth at 
all times.

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12120) New plugin type AuditLoggerPlugin

2018-03-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419523#comment-16419523
 ] 

Hrishikesh Gadre edited comment on SOLR-12120 at 3/29/18 6:35 PM:
--

{quote} * 
 ** If the latter, i.e. on all nodes, what to use as the "search id" to be able 
to correlate the events from each replica as belonging to the same end-user 
search?{quote}
Not sure if audit log plugin needs to worry about correlation. Typically 
tracing frameworks (e.g. HTrace) provide such functionality.

 

 


was (Author: hgadre):
{quote} * 
 ** If the latter, i.e. on all nodes, what to use as the "search id" to be able 
to correlate the events from each replica as belonging to the same end-user 
search?{quote}
Not sure if audit log plugin needs to do worry about correlation. Typically 
tracing frameworks (e.g. HTrace) provide such functionality.

 

 

> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-29 Thread Aaron Greenspan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419536#comment-16419536
 ] 

Aaron Greenspan commented on SOLR-7896:
---

I agree with Gus that the primary issue here is just getting some kind of 
simple protection for the admin UI in place.

Maybe there's a better solution than the key I've proposed, but I would note 
that the worst-case scenario of the server being "forever compromised" is 
already the default way Solr works now. Everything is open and effectively 
pre-compromised. If browser development tools can see requests to a Solr 
back-end to discover my hypothetical key, they can already see requests to the 
server and can discover everything in the store, so something is wrong with how 
the developer built their site. (I'd think Solr requests should be going on in 
the background, not in some client-side JavaScript call.) Furthermore, all of 
the general arguments as to why a key would be insecure could be made for any 
password authentication scheme (someone could discover it, it should be changed 
regularly, etc.).

My point was that users should not be sending their admin passwords in an HTTP 
GET string. So a randomly-generated key would be preferable, given that Solr 
works that way.

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12120) New plugin type AuditLoggerPlugin

2018-03-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419523#comment-16419523
 ] 

Hrishikesh Gadre commented on SOLR-12120:
-

{quote} * 
 ** If the latter, i.e. on all nodes, what to use as the "search id" to be able 
to correlate the events from each replica as belonging to the same end-user 
search?{quote}
Not sure if audit log plugin needs to worry about correlation. Typically 
tracing frameworks (e.g. HTrace) provide such functionality.

 

 

> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12160) Document Time Routed Aliases separate from API

2018-03-29 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419522#comment-16419522
 ] 

Cassandra Targett commented on SOLR-12160:
--

+1 looks good David.

As for where..."How SolrCloud Works" seems all right based on the available 
section options at the moment.

> Document Time Routed Aliases separate from API
> --
>
> Key: SOLR-12160
> URL: https://issues.apache.org/jira/browse/SOLR-12160
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: time-routed-aliases.adoc
>
>
> Time Routed Aliases ought to have some documentation that is apart from the 
> API details which are already documented (thanks to Gus for that part).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-29 Thread Gus Heck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419519#comment-16419519
 ] 

Gus Heck commented on SOLR-7896:


[~thinkcomp] While this could be implemented, permanent key systems are not 
very secure. If the key is lifted (i.e. from browser dev tools) by someone 
nefarious (think disgruntled employee for example, or code bug exposing the key 
on a request), your server is forever compromised. Unless you have some 
protocol for regenerating the key regularly, and then getting that out to the 
clients that *should* have it, you're hosed. I for one wouldn't want to invest 
time in building something like that as it will be eschewed by anyone truly 
serious about security.

Also as you point out roles are likely to be desirable. But I think we are in 
danger of mixing two things here... Authentication and Authorization. My read 
of the original ticket is that this was about adding an Authentication check 
only, and only for a single admin user. A separate issue designing a fine 
grained permission-role-user mapping system should be filed if authorization 
beyond all or nothing is desired.

The initial password-setting routine, however, sounds good. Perhaps all requests 
to the API or UI should be redirected to the password-setting page when Solr is 
started with password-protected admin enabled.

 

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12120) New plugin type AuditLoggerPlugin

2018-03-29 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419518#comment-16419518
 ] 

Hrishikesh Gadre commented on SOLR-12120:
-

[~janhoy] Sorry for late reply.
{quote}Should we strive to have only *one* audit log event per Solr request, or 
is it common to have multiple as currently done in this patch, i.e. one for 
successful authentication and another for authorization?
{quote}
I think there is no need to log authentication success events when 
authorization is configured. So in that case we can just track authentication 
failures. It may also be a good idea to support suppressing some of these 
events (e.g. a user may only care about actions performed by authenticated 
users. So we may not want to generate authentication failure events in that 
case).
{quote} * Should we log internal requests, i.e. overseer actions, or requests 
stemming from auto-scaling triggers etc?{quote}
I don't think the audit log plugin needs to care about internal vs. external 
requests. It should just log every incoming request. At least, this is how I 
have implemented audit logging for Solr in Sentry.
{quote}For distributed requests, should we log only on the first node, or on 
every replica that the request is distributed to?
{quote}
Same as above. Just by logging every incoming request, we can avoid all these 
complications.
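
For illustration only, here is a rough sketch of what a plugin following this 
approach could look like, assuming the {{AuditLoggerPlugin}} base class and the 
{{SolrAuditEvent}}/{{EventType}} API proposed in the issue description (none of 
which exists in Solr yet), plus hypothetical {{getEventType()}}, {{getUser()}} 
and {{getPath()}} accessors:
{code:java}
import java.lang.invoke.MethodHandles;
import java.util.EnumSet;
import java.util.Set;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch: AuditLoggerPlugin, SolrAuditEvent and EventType are the
// types proposed in this issue, not an existing Solr API.
public class FilteringAuditLogger extends AuditLoggerPlugin {
  private static final Logger log =
      LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

  // Event types the operator chooses to suppress, e.g. AUTHENTICATED when an
  // authorization plugin already accounts for successful logins.
  private final Set<EventType> suppressed = EnumSet.of(EventType.AUTHENTICATED);

  @Override
  public void audit(SolrAuditEvent event) {
    if (suppressed.contains(event.getEventType())) {
      return; // drop suppressed event types
    }
    // Otherwise log every incoming request, internal or external alike.
    log.info("AUDIT type={} user={} path={}",
        event.getEventType(), event.getUser(), event.getPath());
  }
}
{code}
A suppression set like this would let an operator drop, say, AUTHENTICATED 
events when authorization already covers successful logins, while every other 
incoming request is logged unconditionally.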

 

> New plugin type AuditLoggerPlugin
> -
>
> Key: SOLR-12120
> URL: https://issues.apache.org/jira/browse/SOLR-12120
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Solr needs a well defined plugin point to implement audit logging 
> functionality, which is independent from whatever {{AuthenticationPlugin}} or 
> {{AuthorizationPlugin}} are in use at the time.
> It seems reasonable to introduce a new plugin type {{AuditLoggerPlugin}}. It 
> could be configured in solr.xml or it could be a third type of plugin defined 
> in {{security.json}}, i.e.
> {code:java}
> {
>   "authentication" : { "class" : ... },
>   "authorization" : { "class" : ... },
>   "auditlogging" : { "class" : "x.y.MyAuditLogger", ... }
> }
> {code}
> We could then instrument SolrDispatchFilter to the audit plugin with an 
> AuditEvent at important points such as successful authentication:
> {code:java}
> auditLoggerPlugin.audit(new SolrAuditEvent(EventType.AUTHENTICATED, 
> request)); 
> {code}
>  We will mark the impl as {{@lucene.experimental}} in the first release to 
> let it settle as people write their own plugin implementations.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-reference-guide-master - Build # 6381 - Still Failing

2018-03-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/6381/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 1ce72537b8b7577657c275dd7a6bfbb081392575 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1ce72537b8b7577657c275dd7a6bfbb081392575
Commit message: "LUCENE-8106: add missing import"
 > git rev-list --no-walk 701af06f627be98ddc8db083dc4dd51dbfe4936a # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /usr/bin/env bash 
/tmp/jenkins1763685098990204278.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.3.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.3/1.29.3.tar.gz.asc
gpg: Signature made Sun 10 Sep 2017 08:59:21 PM UTC using RSA key ID BF04FF17
gpg: Good signature from "Michal Papis (RVM signing) "
gpg: aka "Michal Papis "
gpg: aka "[jpeg image of size 5015]"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 409B 6B17 96C2 7546 2A17  0311 3804 BB82 D39D C0E3
 Subkey fingerprint: 62C9 E5F4 DA30 0D94 AC36  166B E206 C29F BF04 FF17
GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.3.tgz'

Upgrading the RVM installation in /home/jenkins/.rvm/
Upgrade of RVM in /home/jenkins/.rvm/ is complete.

Upgrade Notes:

  * No new notes to display.

+ set +x
Running 'source /home/jenkins/.rvm/scripts/rvm'
Running 'rvm autolibs disable'
Running 'rvm install ruby-2.3.3'
Already installed ruby-2.3.3.
To reinstall use:

rvm reinstall ruby-2.3.3

Running 'rvm gemset create solr-refguide-gemset'
ruby-2.3.3 - #gemset created 
/home/jenkins/.rvm/gems/ruby-2.3.3@solr-refguide-gemset
ruby-2.3.3 - #generating solr-refguide-gemset 
wrappers|/-\|/-\|.-\|/-\|/-.|/-\|/-\|.-\|/-\|/-.|/-\|/-\|.-\|/-\|/-.|/-\|/-\|.-\|/-.
Running 'rvm ruby-2.3.3@solr-refguide-gemset'
Using /home/jenkins/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
Running 'gem install --force --version 3.5.0 jekyll'
Successfully installed jekyll-3.5.0
Parsing documentation for jekyll-3.5.0
Done installing documentation for jekyll after 1 seconds
1 gem installed
Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
Successfully installed jekyll-asciidoc-2.1.0
Parsing documentation for jekyll-asciidoc-2.1.0
Done installing documentation for jekyll-asciidoc after 0 seconds
1 gem installed
Running 'gem install --force --version 1.1.2 pygments.rb'
Successfully installed pygments.rb-1.1.2
Parsing documentation for pygments.rb-1.1.2
Done installing documentation for pygments.rb after 0 seconds
1 gem installed
Running 'ant ivy-bootstrap'
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/build.xml

-ivy-bootstrap1:
 [echo] installing ivy 2.4.0 to /home/jenkins/.ant/lib
  [get] Getting: 
http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar
  [get] To: /home/jenkins/.ant/lib/ivy-2.4.0.jar
  [get] Not modified - so not downloaded

-ivy-bootstrap2:

-ivy-checksum:

-ivy-remove-old-versions:

ivy-bootstrap:

BUILD SUCCESSFUL
Total time: 0 seconds
+ ant clean build-site build-pdf
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/solr-ref-guide/build.xml

clean:

build-init:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copying all non template files from src ...
 [copy] Copying 410 files to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copy (w/prop replacement) any template files from src...
 [copy] Copying 1 file to 

[jira] [Commented] (SOLR-9685) tag a query in JSON syntax

2018-03-29 Thread Dmitry Tikhonov (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419504#comment-16419504
 ] 

Dmitry Tikhonov commented on SOLR-9685:
---

FYI here is a pull request - https://github.com/apache/lucene-solr/pull/347

> tag a query in JSON syntax
> --
>
> Key: SOLR-9685
> URL: https://issues.apache.org/jira/browse/SOLR-9685
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module, JSON Request API
>Reporter: Yonik Seeley
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> There should be a way to tag a query/filter in JSON syntax.
> Perhaps these two forms could be equivalent:
> {code}
> "{!tag=COLOR}color:blue"
> { tagged : { COLOR : "color:blue" } }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page

2018-03-29 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419503#comment-16419503
 ] 

Cassandra Targett commented on SOLR-12163:
--

Instead of a patch I'm putting up the whole revised file since there are so 
many changes to the page. If anyone who knows ZK setup well has time to take a 
look, I'd appreciate it - otherwise I'll try to commit the changes sometime 
next week.

> Ref Guide: Improve Setting Up an External ZK Ensemble page
> --
>
> Key: SOLR-12163
> URL: https://issues.apache.org/jira/browse/SOLR-12163
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
> Fix For: 7.4
>
> Attachments: setting-up-an-external-zookeeper-ensemble.adoc
>
>
> I had to set up a ZK ensemble the other day for the first time in a while, 
> and thought I'd test our docs on the subject while I was at it. I headed over 
> to 
> https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html,
>  and...Well, I still haven't gotten back to what I was trying to do, but I 
> rewrote the entire page.
> The problem to me is that the page today is mostly a stripped down copy of 
> the ZK Getting Started docs: walking through setting up a single ZK instance 
> before introducing the idea of an ensemble and going back through the same 
> configs again to update them for the ensemble.
> IOW, despite the page being titled "setting up an ensemble", it's mostly 
> about not setting up an ensemble. That's at the end of the page, which itself 
> focuses a bit heavily on the use case of running an ensemble on a single 
> server (so, if you're counting...that's 3 use cases we don't want people to 
> use discussed in detail on a page that's supposedly about _not_ doing any of 
> those things).
> So, I took all of it and restructured the whole thing to focus primarily on 
> the use case we want people to use: running 3 ZK nodes on different machines. 
> Running 3 on one machine is still there, but noted in passing with the 
> appropriate caveats. I've also added information about choosing to use a 
> chroot, which AFAICT was only covered in the section on Taking Solr to 
> Production.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12163) Ref Guide: Improve Setting Up an External ZK Ensemble page

2018-03-29 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-12163:


 Summary: Ref Guide: Improve Setting Up an External ZK Ensemble page
 Key: SOLR-12163
 URL: https://issues.apache.org/jira/browse/SOLR-12163
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Cassandra Targett
Assignee: Cassandra Targett
 Fix For: 7.4
 Attachments: setting-up-an-external-zookeeper-ensemble.adoc

I had to set up a ZK ensemble the other day for the first time in a while, and 
thought I'd test our docs on the subject while I was at it. I headed over to 
https://lucene.apache.org/solr/guide/setting-up-an-external-zookeeper-ensemble.html,
 and...Well, I still haven't gotten back to what I was trying to do, but I 
rewrote the entire page.

The problem to me is that the page today is mostly a stripped down copy of the 
ZK Getting Started docs: walking through setting up a single ZK instance before 
introducing the idea of an ensemble and going back through the same configs 
again to update them for the ensemble.

IOW, despite the page being titled "setting up an ensemble", it's mostly about 
not setting up an ensemble. That's at the end of the page, which itself focuses 
a bit heavily on the use case of running an ensemble on a single server (so, if 
you're counting...that's 3 use cases we don't want people to use discussed in 
detail on a page that's supposedly about _not_ doing any of those things).

So, I took all of it and restructured the whole thing to focus primarily on the 
use case we want people to use: running 3 ZK nodes on different machines. 
Running 3 on one machine is still there, but noted in passing with the 
appropriate caveats. I've also added information about choosing to use a 
chroot, which AFAICT was only covered in the section on Taking Solr to 
Production.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #347: SOLR-9685 tag a query in JSON syntax

2018-03-29 Thread squallsama
GitHub user squallsama opened a pull request:

https://github.com/apache/lucene-solr/pull/347

SOLR-9685 tag a query in JSON syntax

Add support for tagging queries in the JSON Query DSL.

Supports the following structure: {"tagged": {"name": "RCOLOR", "query": { 
"term": { "f": "color", "v": "blue" } } } }

This can be used in json.facet="{colors: { type:terms, field:color, domain:{ 
excludeTags: RCOLOR } } }"
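
For reference, a hedged end-to-end sketch of a JSON request using the proposed 
syntax (the "tagged" wrapper is only what this PR proposes, not existing Solr 
syntax; the field and tag names are invented for the example):

{code}
{
  "query": "*:*",
  "filter": [
    { "tagged": { "name": "RCOLOR",
                  "query": { "term": { "f": "color", "v": "blue" } } } }
  ],
  "facet": {
    "colors": { "type": "terms", "field": "color",
                "domain": { "excludeTags": "RCOLOR" } }
  }
}
{code}

The facet's excludeTags domain then ignores the tagged filter when computing 
the "colors" buckets, which is the usual reason for tagging a filter.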

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/squallsama/lucene-solr 7_3_SOLR-9685

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/347.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #347


commit 82b8f28e36a3c09991f8eb96216c78178c63cbf5
Author: Dmitry Tikhonov 
Date:   2018-03-29T17:54:47Z

SOLR-9685 tag a query in JSON syntax

Support following structure: {"tagged": {"name": "RCOLOR","query": { 
"term": { "f": "color","v": "blue"




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-repro - Build # 381 - Failure

2018-03-29 Thread Steve Rowe
My attempted fix for this problem (catching IncompleteRead and retrying) 
failed, I think because I didn’t add the correct import statement - from the 
log:

-
  File "dev-tools/scripts/reproduceJenkinsFailures.py", line 131, in 
fetchAndParseJenkinsLog
except http.client.IncompleteRead as e:
NameError: name 'http' is not defined
-

I’ve committed a fix to master and branch_7x, we’ll see if that fixes it.

--
Steve
www.lucidworks.com

> On Mar 29, 2018, at 1:29 PM, Apache Jenkins Server 
>  wrote:
> 
> Build: https://builds.apache.org/job/Lucene-Solr-repro/381/
> 
> [...truncated 36 lines...]
> [repro] Jenkins log URL: 
> https://builds.apache.org/job/Lucene-Solr-Tests-7.x/534/consoleText
> 
> [repro] Revision: 779171533ae0191a209a17e114b569313910f4f4
> 
> [...truncated 31 lines...]
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419483#comment-16419483
 ] 

ASF subversion and git services commented on LUCENE-8106:
-

Commit 06e43084a57d29fd9dc176dfec145c148c2b5e50 in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=06e4308 ]

LUCENE-8106: add missing import


> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106-part3.patch, 
> LUCENE-8106-part4.patch, LUCENE-8106.part5.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419484#comment-16419484
 ] 

ASF subversion and git services commented on LUCENE-8106:
-

Commit 1ce72537b8b7577657c275dd7a6bfbb081392575 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1ce7253 ]

LUCENE-8106: add missing import


> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Fix For: 7.3, master (8.0)
>
> Attachments: LUCENE-8106-part2.patch, LUCENE-8106-part3.patch, 
> LUCENE-8106-part4.patch, LUCENE-8106.part5.patch, LUCENE-8106.patch, 
> LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12087) Deleting replicas sometimes fails and causes the replicas to exist in the down state

2018-03-29 Thread Jerry Bao (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419464#comment-16419464
 ] 

Jerry Bao commented on SOLR-12087:
--

Can we get this fix backported to 7.3 and have a 7.3.1?

> Deleting replicas sometimes fails and causes the replicas to exist in the 
> down state
> 
>
> Key: SOLR-12087
> URL: https://issues.apache.org/jira/browse/SOLR-12087
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.2
>Reporter: Jerry Bao
>Assignee: Cao Manh Dat
>Priority: Critical
> Fix For: 7.4
>
> Attachments: SOLR-12087.patch, SOLR-12087.patch, SOLR-12087.patch, 
> SOLR-12087.test.patch, Screen Shot 2018-03-16 at 11.50.32 AM.png
>
>
> Sometimes when deleting replicas, the replica fails to be removed from the 
> cluster state. This occurs especially when deleting replicas en mass; the 
> resulting cause is that the data is deleted but the replicas aren't removed 
> from the cluster state. Attempting to delete the downed replicas causes 
> failures because the core does not exist anymore.
> This also occurs when trying to move replicas, since that move is an add and 
> delete.
> Some more information regarding this issue; when the MOVEREPLICA command is 
> issued, the new replica is created successfully but the replica to be deleted 
> fails to be removed from state.json (the core is deleted though) and we see 
> two logs spammed.
>  # The node containing the leader replica continually (read every second) 
> attempts to initiate recovery on the replica and fails to do so because the 
> core does not exist. As a result it continually publishes a down state for 
> the replica to zookeeper.
>  # The deleted replica node spams that it cannot locate the core because it's 
> been deleted.
> During this period of time, we see an increase in ZK network connectivity 
> overall, until the replica is finally deleted (spamming DELETEREPLICA on the 
> shard until its removed from the state)
> My guess is there's two issues at hand here:
>  # The leader continually attempts to recover a downed replica that is 
> unrecoverable because the core does not exist.
>  # The replica to be deleted is having trouble being deleted from state.json 
> in ZK.
> This is mostly consistent for my use case. I'm running 7.2.1 with 66 nodes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 381 - Failure

2018-03-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/381/

[...truncated 36 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/534/consoleText

[repro] Revision: 779171533ae0191a209a17e114b569313910f4f4

[...truncated 31 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-11929) TestRecovery failures

2018-03-29 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419428#comment-16419428
 ] 

Steve Rowe commented on SOLR-11929:
---

Another reproducing seed (again without {{\-Dtests.method=...}}), from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1769/]:

{noformat}
Checking out Revision 668b81721fa5b539d9286ed2f464426a598c352a 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
-Dtests.method=testBuffering -Dtests.seed=D6674863F4F03A58 -Dtests.slow=true 
-Dtests.locale=lt -Dtests.timezone=Africa/Conakry -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.02s J0 | TestRecovery.testBuffering <<<
   [junit4]> Throwable #1: java.lang.NullPointerException
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([D6674863F4F03A58:CB89E64855A99B73]:0)
   [junit4]>at 
org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:495)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
[...]
   [junit4]   2> 1992912 ERROR 
(TEST-TestRecovery.testRecoveryMultipleLogs-seed#[D6674863F4F03A58]) [
x:collection1] o.a.s.SolrTestCaseJ4 query failed JSON validation. 
error=mismatch: '6'!='3' @ response/numFound
   [junit4]   2>  expected =/response/numFound==6
   [junit4]   2>  response = {
   [junit4]   2>   "responseHeader":{
   [junit4]   2> "status":0,
   [junit4]   2> "QTime":0},
   [junit4]   2>   "response":{"numFound":3,"start":0,"docs":[
   [junit4]   2>   {
   [junit4]   2> "id":"aa",
   [junit4]   2> "_version_":1596276857454460929},
   [junit4]   2>   {
   [junit4]   2> "id":"bb",
   [junit4]   2> "_version_":1596276857455509504},
   [junit4]   2>   {
   [junit4]   2> "id":"cc",
   [junit4]   2> "_version_":1596276857455509505}]
   [junit4]   2>   }}
   [junit4]   2> 
   [junit4]   2>  request = q=*:*=xml
   [junit4]   2> 1992912 INFO  
(TEST-TestRecovery.testRecoveryMultipleLogs-seed#[D6674863F4F03A58]) [
x:collection1] o.a.s.SolrTestCaseJ4 ###Ending testRecoveryMultipleLogs
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
-Dtests.method=testRecoveryMultipleLogs -Dtests.seed=D6674863F4F03A58 
-Dtests.slow=true -Dtests.locale=lt -Dtests.timezone=Africa/Conakry 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
   [junit4] ERROR   0.36s J0 | TestRecovery.testRecoveryMultipleLogs <<<
   [junit4]> Throwable #1: java.lang.RuntimeException: mismatch: '6'!='3' @ 
response/numFound
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([D6674863F4F03A58:6FF2D052738B7DB1]:0)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:1002)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:949)
   [junit4]>at 
org.apache.solr.search.TestRecovery.testRecoveryMultipleLogs(TestRecovery.java:1448)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
[...]
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{_root_=Lucene50(blocksize=128), id=FSTOrd50}, 
docValues:{_version_=DocValuesFormat(name=Lucene70), 
val_i_dvo=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=647, 
maxMBSortInHeap=6.830494683367925, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@2da1e99a),
 locale=lt, timezone=Africa/Conakry
   [junit4]   2> NOTE: SunOS 5.11 amd64/Oracle Corporation 1.8.0_162 
(64-bit)/cpus=3,threads=1,free=161610624,total=473956352
{noformat}


> TestRecovery failures
> -
>
> Key: SOLR-11929
> URL: https://issues.apache.org/jira/browse/SOLR-11929
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Priority: Major
>
> My Jenkins found a branch_7x seed for {{TestRecovery.testBuffering()}} and 
> {{TestRecovery.testCorruptLog()}} that reproduces for me 5/5 times (when I 
> exclude {{-Dtests.method=...}} from the cmdline):
> {noformat}
> Checking out Revision 1ef988a26378137b1e1f022985dacee1f557f4fc 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestRecovery 
> -Dtests.method=testBuffering -Dtests.seed=FC96FD26F8A8CC6F -Dtests.slow=true 
> -Dtests.locale=de-GR -Dtests.timezone=Europe/London -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 0.02s J3  | TestRecovery.testBuffering <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<1> but 
> was:<3>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([FC96FD26F8A8CC6F:E178530D59F16D44]:0)
>[junit4]>  at 
> org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:494)
>[junit4]>  

[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+5) - Build # 523 - Still Unstable!

2018-03-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/523/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.testTriggerThrottling

Error Message:
java.util.concurrent.ExecutionException: java.io.IOException: 
org.apache.solr.api.ApiBag$ExceptionWithErrObject: Error in command payload, 
errors: [{suspend-trigger={name=.scheduled_maintenance}, errorMessages=[No 
trigger exists with name: .scheduled_maintenance]}], 

Stack Trace:
java.io.IOException: java.util.concurrent.ExecutionException: 
java.io.IOException: org.apache.solr.api.ApiBag$ExceptionWithErrObject: Error 
in command payload, errors: [{suspend-trigger={name=.scheduled_maintenance}, 
errorMessages=[No trigger exists with name: .scheduled_maintenance]}], 
at 
__randomizedtesting.SeedInfo.seed([57A3966F90A73605:AC813E4A420DD597]:0)
at 
org.apache.solr.cloud.autoscaling.sim.SimCloudManager.request(SimCloudManager.java:462)
at 
org.apache.solr.cloud.autoscaling.sim.SimCloudManager$1.request(SimCloudManager.java:336)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration.setupTest(TestTriggerIntegration.java:119)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:968)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
   

[jira] [Commented] (LUCENE-8227) TestGeo3DPoint.testGeo3DRelations() reproducing failures

2018-03-29 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8227?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419419#comment-16419419
 ] 

Adrien Grand commented on LUCENE-8227:
--

bq. I was forced to use Ignore annotation instead, since I couldn't figure out 
what was wrong with AwaitsFix.

I think this is because you need to do {{@AwaitsFix(bugUrl="http://foo")}} 
rather than {{@AwaitsFix("http://foo")}}.
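
As a quick sketch (using this issue's URL as the bug link), the working form on 
a test method would look something like:
{code:java}
import org.apache.lucene.util.LuceneTestCase.AwaitsFix;

// AwaitsFix declares only a bugUrl() element (there is no value()), so the
// named form is required.
@AwaitsFix(bugUrl = "https://issues.apache.org/jira/browse/LUCENE-8227")
public void testGeo3DRelations() throws Exception {
  // ...
}
{code}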

> TestGeo3DPoint.testGeo3DRelations() reproducing failures
> 
>
> Key: LUCENE-8227
> URL: https://issues.apache.org/jira/browse/LUCENE-8227
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/test, modules/spatial3d
>Reporter: Steve Rowe
>Assignee: Karl Wright
>Priority: Blocker
>
> Three failures: two NPEs and one assert "assess edge that ends in a crossing 
> can't both up and down":
> 1.a. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1512/]:
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPoint 
> -Dtests.method=testGeo3DRelations -Dtests.seed=C1F88333EC85EAE0 
> -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
>  -Dtests.locale=ga -Dtests.timezone=America/Ojinaga -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] ERROR   10.4s J1 | TestGeo3DPoint.testGeo3DRelations <<<
>[junit4]> Throwable #1: java.lang.NullPointerException
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([C1F88333EC85EAE0:7187FEA763C8447C]:0)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:569)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:646)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.isWithin(GeoComplexPolygon.java:370)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseMembershipShape.isWithin(GeoBaseMembershipShape.java:36)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoBaseShape.getBounds(GeoBaseShape.java:35)
>[junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon.getBounds(GeoComplexPolygon.java:440)
>[junit4]>  at 
> org.apache.lucene.spatial3d.TestGeo3DPoint.testGeo3DRelations(TestGeo3DPoint.java:225)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {noformat}
> 1.b. (NPE) From 
> [https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/184/]:
> {noformat}
>[smoker][junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestGeo3DPoint -Dtests.method=testGeo3DRelations 
> -Dtests.seed=F2A368AB96A2FD75 -Dtests.multiplier=2 -Dtests.locale=fr-ML 
> -Dtests.timezone=America/Godthab -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[smoker][junit4] ERROR   0.99s J0 | TestGeo3DPoint.testGeo3DRelations 
> <<<
>[smoker][junit4]> Throwable #1: java.lang.NullPointerException
>[smoker][junit4]>  at 
> __randomizedtesting.SeedInfo.seed([F2A368AB96A2FD75:42DC153F19EF53E9]:0)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.countCrossingPoint(GeoComplexPolygon.java:1382)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$DualCrossingEdgeIterator.matches(GeoComplexPolygon.java:1283)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:564)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Node.traverse(GeoComplexPolygon.java:572)
>[smoker][junit4]>  at 
> org.apache.lucene.spatial3d.geom.GeoComplexPolygon$Tree.traverse(GeoComplexPolygon.java:660)
>[smoker][junit4]>  at 
> 

Re: [VOTE] Release Lucene/Solr 7.3.0 RC2

2018-03-29 Thread Steve Rowe
(resending because I replied to the wrong thread earlier)

+1

Docs, changes and javadocs look good.

Smoke tester says: SUCCESS! [0:35:05.242678]

--
Steve
www.lucidworks.com

> On Mar 28, 2018, at 1:11 PM, Alan Woodward  wrote:
> 
> Please vote for release candidate 2 for Lucene/Solr 7.3.0
> 
> The artefacts can be downloaded from:
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.3.0-RC2-rev98a6b3d642928b1ac9076c6c5a369472581f7633
> 
> You can run the smoke tester directly with this command:
> python3 -u dev-tools/scripts/smokeTestRelease.py 
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-7.3.0-RC2-rev98a6b3d642928b1ac9076c6c5a369472581f7633
> 
> Here’s my +1
> SUCCESS! [1:08:28.045253]
> 
> 
> Note that this vote will be open a little longer than usual as it’s a Bank 
> Holiday weekend in the UK.  If there are no -1s, the vote will close on 
> Tuesday April 3rd.


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Solr-reference-guide-master - Build # 6380 - Failure

2018-03-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-reference-guide-master/6380/

Log: 
Started by timer
[EnvInject] - Loading node environment variables.
Building remotely on websites1 (git-websites) in workspace 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git fetch --tags --progress git://git.apache.org/lucene-solr.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 701af06f627be98ddc8db083dc4dd51dbfe4936a 
(refs/remotes/origin/master)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 701af06f627be98ddc8db083dc4dd51dbfe4936a
Commit message: "SOLR-12136: Docs: Improve hl.fl, hl.q, hl.qparser"
 > git rev-list --no-walk 358e59596d17ba34452ea923e048afee6233d597 # timeout=10
No emails were triggered.
[Solr-reference-guide-master] $ /usr/bin/env bash 
/tmp/jenkins4879367065475874378.sh
+ set -e
+ RVM_PATH=/home/jenkins/.rvm
+ RUBY_VERSION=ruby-2.3.3
+ GEMSET=solr-refguide-gemset
+ curl -sSL https://get.rvm.io
+ bash -s -- --ignore-dotfiles stable
Turning on ignore dotfiles mode.
Downloading https://github.com/rvm/rvm/archive/1.29.3.tar.gz
Downloading 
https://github.com/rvm/rvm/releases/download/1.29.3/1.29.3.tar.gz.asc
gpg: Signature made Sun 10 Sep 2017 08:59:21 PM UTC using RSA key ID BF04FF17
gpg: Good signature from "Michal Papis (RVM signing) "
gpg: aka "Michal Papis "
gpg: aka "[jpeg image of size 5015]"
gpg: WARNING: This key is not certified with a trusted signature!
gpg:  There is no indication that the signature belongs to the owner.
Primary key fingerprint: 409B 6B17 96C2 7546 2A17  0311 3804 BB82 D39D C0E3
 Subkey fingerprint: 62C9 E5F4 DA30 0D94 AC36  166B E206 C29F BF04 FF17
GPG verified '/home/jenkins/.rvm/archives/rvm-1.29.3.tgz'

Upgrading the RVM installation in /home/jenkins/.rvm/
Upgrade of RVM in /home/jenkins/.rvm/ is complete.

Upgrade Notes:

  * No new notes to display.

+ set +x
Running 'source /home/jenkins/.rvm/scripts/rvm'
Running 'rvm autolibs disable'
Running 'rvm install ruby-2.3.3'
Already installed ruby-2.3.3.
To reinstall use:

rvm reinstall ruby-2.3.3

Running 'rvm gemset create solr-refguide-gemset'
ruby-2.3.3 - #gemset created 
/home/jenkins/.rvm/gems/ruby-2.3.3@solr-refguide-gemset
ruby-2.3.3 - #generating solr-refguide-gemset 
wrappers|/-\|/-\|.-\|/-\|/-.|/-\|/-\|.-\|/-\|/-.|/-\|/-\|.-\|/-\|/-.|/-\|/-\|.-\|/-.
Running 'rvm ruby-2.3.3@solr-refguide-gemset'
Using /home/jenkins/.rvm/gems/ruby-2.3.3 with gemset solr-refguide-gemset
Running 'gem install --force --version 3.5.0 jekyll'
Successfully installed jekyll-3.5.0
Parsing documentation for jekyll-3.5.0
Done installing documentation for jekyll after 1 seconds
1 gem installed
Running 'gem install --force --version 2.1.0 jekyll-asciidoc'
Successfully installed jekyll-asciidoc-2.1.0
Parsing documentation for jekyll-asciidoc-2.1.0
Done installing documentation for jekyll-asciidoc after 0 seconds
1 gem installed
Running 'gem install --force --version 1.1.2 pygments.rb'
Successfully installed pygments.rb-1.1.2
Parsing documentation for pygments.rb-1.1.2
Done installing documentation for pygments.rb after 0 seconds
1 gem installed
Running 'ant ivy-bootstrap'
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/build.xml

-ivy-bootstrap1:
 [echo] installing ivy 2.4.0 to /home/jenkins/.ant/lib
  [get] Getting: 
http://repo1.maven.org/maven2/org/apache/ivy/ivy/2.4.0/ivy-2.4.0.jar
  [get] To: /home/jenkins/.ant/lib/ivy-2.4.0.jar
  [get] Not modified - so not downloaded

-ivy-bootstrap2:

-ivy-checksum:

-ivy-remove-old-versions:

ivy-bootstrap:

BUILD SUCCESSFUL
Total time: 0 seconds
+ ant clean build-site build-pdf
Buildfile: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/solr-ref-guide/build.xml

clean:

build-init:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copying all non template files from src ...
 [copy] Copying 410 files to 
/home/jenkins/jenkins-slave/workspace/Solr-reference-guide-master/solr/build/solr-ref-guide/content
 [echo] Copy (w/prop replacement) any template files from src...
 [copy] Copying 1 file to 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 26 - Still unstable

2018-03-29 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/26/

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.api.collections.ShardSplitTest

Error Message:
85 threads leaked from SUITE scope at 
org.apache.solr.cloud.api.collections.ShardSplitTest: 1) Thread[id=933, 
name=qtp73158734-933, state=RUNNABLE, group=TGRP-ShardSplitTest] at 
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) at 
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269) at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93) at 
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97) at 
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101) at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:375)
 at 
org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:304)
 at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:179)
 at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.produce(EatWhatYouKill.java:140)
 at 
org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:131)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:382)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)2) Thread[id=940, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-ShardSplitTest]
 at java.lang.Thread.sleep(Native Method) at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)3) Thread[id=1028, 
name=Scheduler-534868668, state=TIMED_WAITING, group=TGRP-ShardSplitTest]   
  at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
 at 
java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
 at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
at java.lang.Thread.run(Thread.java:748)4) Thread[id=945, 
name=TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[C8CF5E9EFB1BE270]-EventThread,
 state=WAITING, group=TGRP-ShardSplitTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
5) Thread[id=1037, 
name=TEST-ShardSplitTest.testSplitStaticIndexReplication-seed#[C8CF5E9EFB1BE270]-SendThread(127.0.0.1:35792),
 state=TIMED_WAITING, group=TGRP-ShardSplitTest] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)   
 6) Thread[id=1022, 
name=qtp299598619-1022-acceptor-0@3f3be3b5-ServerConnector@2d4acf33{HTTP/1.1,[http/1.1]}{127.0.0.1:36648},
 state=RUNNABLE, group=TGRP-ShardSplitTest] at 
sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) 
at 
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) 
at 
org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:379)   
  at 
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:638)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)7) Thread[id=1056, 
name=Connection evictor, state=TIMED_WAITING, group=TGRP-ShardSplitTest]
 at java.lang.Thread.sleep(Native Method) at 

[jira] [Commented] (SOLR-12139) Support "eq" function for string fields

2018-03-29 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419380#comment-16419380
 ] 

Lucene/Solr QA commented on SOLR-12139:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  0m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 52m 
20s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12139 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916801/SOLR-12139.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 358e595 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/28/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/28/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Support "eq" function for string fields
> ---
>
> Key: SOLR-12139
> URL: https://issues.apache.org/jira/browse/SOLR-12139
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Reporter: Andrey Kudryavtsev
>Assignee: Mikhail Khludnev
>Priority: Minor
> Attachments: SOLR-12139.patch, SOLR-12139.patch, SOLR-12139.patch, 
> SOLR-12139.patch
>
>
> I just discovered that the {{eq}} user function works for numeric fields only.
> For string types it results in {{java.lang.UnsupportedOperationException}}.
> What do you think about extending it to support at least some of the string
> types as well?
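
For context, a minimal SolrJ sketch of the numeric case that works today (the collection and field names below are assumptions, not from the issue); the string-field case described above is what currently fails with UnsupportedOperationException:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class EqFunctionDemo {
      public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
          SolrQuery q = new SolrQuery("*:*");
          q.addField("id");
          // eq() as a pseudo-field: works against a numeric field such as popularity_i today
          q.addField("isPopular:eq(popularity_i,10)");
          // eq() against a string field is what this issue asks for; per the description
          // above, it currently throws java.lang.UnsupportedOperationException
          QueryResponse rsp = client.query(q);
          System.out.println(rsp.getResults());
        }
      }
    }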



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-29 Thread Aaron Greenspan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419378#comment-16419378
 ] 

Aaron Greenspan commented on SOLR-7896:
---

Here's how I'd like Solr to work. When installing it fresh (no content), the 
first thing you have to do is go to the UI and set an admin password. Once 
you've done that, you should be given a choice to leave your API wide open (how 
it works now, firewalls aside), or generate a security key that in the future 
gets passed to every API request as an HTTP GET variable. If you don't pass the 
key and it's set to be required, the API request fails. If you pass the wrong 
key and it's required, the API request fails. If you pass the right key and 
it's required, or if no key is required, you get results back. You can change 
the security key settings in the admin UI by signing in with your username and 
password. Potentially, you could have different security keys for different use 
cases, and track their usage.

I have no experience as a Solr Java developer so maybe doing this is impossible 
or just merely difficult. But it would bring Solr in line with almost every 
other enterprise software product I've ever used.
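
As a purely illustrative sketch of the kind of check being proposed (this is not an existing Solr feature; the parameter name, config source, and class below are made up), assuming a servlet filter sitting in front of the APIs:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ApiKeyFilter implements Filter {
      private String requiredKey; // would come from admin-set configuration (hypothetical)

      @Override
      public void init(FilterConfig cfg) {
        requiredKey = cfg.getInitParameter("apiKey"); // hypothetical config source; null = no key required
      }

      @Override
      public void doFilter(ServletRequest req, ServletResponse resp, FilterChain chain)
          throws IOException, ServletException {
        String key = ((HttpServletRequest) req).getParameter("key"); // key passed as an HTTP GET variable
        if (requiredKey != null && !requiredKey.equals(key)) {
          // no key, or the wrong key, while a key is required: fail the API request
          ((HttpServletResponse) resp).sendError(HttpServletResponse.SC_UNAUTHORIZED,
              "missing or invalid API key");
          return;
        }
        chain.doFilter(req, resp); // right key, or no key required: return results as usual
      }

      @Override
      public void destroy() {}
    }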

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread Ryan Zwiefelhofer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Zwiefelhofer updated SOLR-12162:
-
Attachment: (was: corepropertieslocator_typo.patch)

> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
> Fix For: master (8.0)
>
> Attachments: SOLR-12162_corepropertieslocator_typo.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
>                                                                               
>                                                  ^ no space here
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread Ryan Zwiefelhofer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Zwiefelhofer updated SOLR-12162:
-
Attachment: SOLR-12162_corepropertieslocator_typo.patch

> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
> Fix For: master (8.0)
>
> Attachments: SOLR-12162_corepropertieslocator_typo.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
>                                                                               
>                                                  ^ no space here
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread Ryan Zwiefelhofer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Zwiefelhofer updated SOLR-12162:
-
Attachment: corepropertieslocator_typo.patch

> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
> Fix For: master (8.0)
>
> Attachments: SOLR-12162_corepropertieslocator_typo.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
>                                                                               
>                                                  ^ no space here
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread Ryan Zwiefelhofer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419340#comment-16419340
 ] 

Ryan Zwiefelhofer commented on SOLR-12162:
--

Created a Pull Request here, https://github.com/apache/lucene-solr/pull/346

> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
>                                                                               
>                                                  ^ no space here
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #346: SOLR-12162: Fixes typo in CorePropertiesLocat...

2018-03-29 Thread rzwiefel
GitHub user rzwiefel opened a pull request:

https://github.com/apache/lucene-solr/pull/346

SOLR-12162: Fixes typo in CorePropertiesLocator

Fixes a simple typo in the CorePropertiesLocator

https://issues.apache.org/jira/browse/SOLR-12162

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rzwiefel/lucene-solr 
master-fix-corepropertieslocator-typo

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/346.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #346


commit 32464a24a69912d1e022948e772f6deeb64a4979
Author: Ryan Zwiefelhofer 
Date:   2018-03-29T16:42:54Z

SOLR-12162: Fixes typo in CorePropertiesLocator




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread Ryan Zwiefelhofer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Zwiefelhofer updated SOLR-12162:
-
Description: 
CorePropertiesLocator has a typo in the SolrException thrown when unable to 
create a new core. 
([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]

There should be a space before the `as` so that the exception message reads 
correctly.

Before:

{{Could not create a new core in /coredescriptor/instancedirectoryas another 
core is already defined there}}

 

After:

{{Could not create a new core in /coredescriptor/instancedirectory as another 
core is already defined there}}

  was:
CorePropertiesLocator has a typo in the SolrException thrown when unable to 
create a new core. 
([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]

There should be a space before the `as` so that the exception message reads 
correctly.

Before:

{{Could not create a new core in /coredescriptor/instancedirectoryas another 
core is already defined there}}

 

After:

Could not create a new core in /coredescriptor/instancedirectory as another 
core is already defined there


> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
> Fix For: master (8.0)
>
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread Ryan Zwiefelhofer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Zwiefelhofer updated SOLR-12162:
-
Description: 
CorePropertiesLocator has a typo in the SolrException thrown when unable to 
create a new core. 
([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]

There should be a space before the `as` so that the exception message reads 
correctly.

Before:

{{Could not create a new core in /coredescriptor/instancedirectoryas another 
core is already defined there}}

                                                                                
                                               ^ no space here

 

After:

{{Could not create a new core in /coredescriptor/instancedirectory as another 
core is already defined there}}

  was:
CorePropertiesLocator has a typo in the SolrException thrown when unable to 
create a new core. 
([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]

There should be a space before the `as` so that the exception message reads 
correctly.

Before:

{{Could not create a new core in /coredescriptor/instancedirectoryas another 
core is already defined there}}

 

After:

{{Could not create a new core in /coredescriptor/instancedirectory as another 
core is already defined there}}


> CorePropertiesLocator Exception message contains a typo when unable to create 
> Solr Core
> ---
>
> Key: SOLR-12162
> URL: https://issues.apache.org/jira/browse/SOLR-12162
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ryan Zwiefelhofer
>Priority: Trivial
> Fix For: master (8.0)
>
>
> CorePropertiesLocator has a typo in the SolrException thrown when unable to 
> create a new core. 
> ([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]
> There should be a space before the `as` so that the exception message reads 
> correctly.
> Before:
> {{Could not create a new core in /coredescriptor/instancedirectoryas another 
> core is already defined there}}
>                                                                               
>                                                  ^ no space here
>  
> After:
> {{Could not create a new core in /coredescriptor/instancedirectory as another 
> core is already defined there}}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1769 - Unstable!

2018-03-29 Thread Steve Rowe
+1

Docs, changes and javadocs look good.

Smoke tester says: SUCCESS! [0:35:05.242678]

--
Steve
www.lucidworks.com

> On Mar 29, 2018, at 9:52 AM, Policeman Jenkins Server  
> wrote:
> 
> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1769/
> Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC
> 
> 11 tests failed.
> FAILED:  org.apache.solr.search.TestRecovery.testBuffering
> 
> Error Message:
> 
> 
> Stack Trace:
> java.lang.NullPointerException
>   at 
> __randomizedtesting.SeedInfo.seed([D6674863F4F03A58:CB89E64855A99B73]:0)
>   at 
> org.apache.solr.search.TestRecovery.testBuffering(TestRecovery.java:495)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
>   at java.lang.Thread.run(Thread.java:748)
> 
> 
> FAILED:  org.apache.solr.search.TestRecovery.testRecoveryMultipleLogs
> 
> Error Message:
> mismatch: '6'!='3' @ response/numFound
> 
> Stack Trace:
> 

[jira] [Created] (SOLR-12162) CorePropertiesLocator Exception message contains a typo when unable to create Solr Core

2018-03-29 Thread Ryan Zwiefelhofer (JIRA)
Ryan Zwiefelhofer created SOLR-12162:


 Summary: CorePropertiesLocator Exception message contains a typo 
when unable to create Solr Core
 Key: SOLR-12162
 URL: https://issues.apache.org/jira/browse/SOLR-12162
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ryan Zwiefelhofer
 Fix For: master (8.0)


CorePropertiesLocator has a typo in the SolrException thrown when unable to 
create a new core. 
([https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/core/CorePropertiesLocator.java#L69)]

There should be a space before the `as` so that the exception message reads 
correctly.

Before:

{{Could not create a new core in /coredescriptor/instancedirectoryas another 
core is already defined there}}

 

After:

Could not create a new core in /coredescriptor/instancedirectory as another 
core is already defined there



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419296#comment-16419296
 ] 

ASF subversion and git services commented on SOLR-12136:


Commit 47849eea7a96e7e1a7fc4fe16f5678582073cf55 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=47849ee ]

SOLR-12136: Docs: Improve hl.fl, hl.q, hl.qparser

(cherry picked from commit 701af06)


> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1&hl.q=something
> then "something" is analyzed against the default field rather than f1
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung"
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1
> These work
> q=f1:Kündigung&hl.fl=f1
> q=f1:Kündigung&hl.fl=f1&hl.q=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung&hl.fl=f1&hl.q=f1:Kündigung <= NOTE, with umlaut
> This does not work
> q=f1:Kündigung&hl.fl=f1&hl.q=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy at it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And lets say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both solveable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}
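
To make the workaround quoted above concrete, a small SolrJ sketch (the collection name is an assumption; field f1 and the query text come from the examples in this issue), in which hl.q is re-parsed with edismax against the highlighted field:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class HlqWorkaroundDemo {
      public static void main(String[] args) throws Exception {
        try (HttpSolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
          SolrQuery q = new SolrQuery("f1:Kündigung");
          q.setHighlight(true);
          q.set("hl.fl", "f1");
          // Re-parse hl.q with edismax against the highlighted field(s), as suggested above,
          // so the hl.q text is analyzed with f1's field type instead of the default field.
          q.set("hl.q", "{!edismax qf=$hl.fl v=$q}");
          QueryResponse rsp = client.query(q);
          System.out.println(rsp.getHighlighting());
        }
      }
    }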



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12136) Document hl.q parameter

2018-03-29 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419286#comment-16419286
 ] 

ASF subversion and git services commented on SOLR-12136:


Commit 701af06f627be98ddc8db083dc4dd51dbfe4936a in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=701af06 ]

SOLR-12136: Docs: Improve hl.fl, hl.q, hl.qparser


> Document hl.q parameter
> ---
>
> Key: SOLR-12136
> URL: https://issues.apache.org/jira/browse/SOLR-12136
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.4
>
> Attachments: SOLR-12136.patch
>
>
> *Original issue:
> If I specify:
> hl.fl=f1&hl.q=something
> then "something" is analyzed against the default field rather than f1
> So in this particular case, f1 did some diacritic folding
> (GermanNormalizationFilterFactory specifically). But my guess is that
> the df was still "text", or at least something that didn't reference
> that filter.
> I'm defining "worked" in what follows as getting highlighting on "Kündigung"
> so
> Kündigung was indexed as Kundigung
> So far so good. Now if I try to highlight on f1
> These work
> q=f1:Kündigung&hl.fl=f1
> q=f1:Kündigung&hl.fl=f1&hl.q=Kundigung <= NOTE, without umlaut
> q=f1:Kündigung&hl.fl=f1&hl.q=f1:Kündigung <= NOTE, with umlaut
> This does not work
> q=f1:Kündigung&hl.fl=f1&hl.q=Kündigung <= NOTE, with umlaut
> Testing this locally, I'd get the highlighting if I defined df as "f1"
> in all the above cases.
> **David Smiley's analysis
> BTW hl.q is parsed by the hl.qparser param which defaults to the defType 
> param which defaults to "lucene".
> In common cases, I think this is a non-issue.  One common case is 
> defType=edismax and you specify a list of fields in 'qf' (thus your query has 
> parts parsed on various fields) and then you set hl.fl to some subset of 
> those fields.  This will use the correct analysis.
> You make a compelling point in terms of what a user might expect -- my gut 
> reaction aligned with your expectation and I thought maybe we should change 
> this.  But it's not as easy at it seems at first blush, and there are bad 
> performance implications.  How do you *generically* tell an arbitrary query 
> parser which field it should parse the string with?  We have no such 
> standard.  And lets say we did; then we'd have to re-parse the query string 
> for each field in hl.fl (and consider hl.fl might be a wildcard!).  Perhaps 
> both solveable or constrainable with yet more parameters, but I'm pessimistic 
> it'll be a better outcome.
> The documentation ought to clarify this matter.  Probably in hl.fl to say 
> that the fields listed are analyzed with that of their field type, and that 
> it ought to be "compatible" (the same or similar) to that which parsed the 
> query.
> Perhaps, like spellcheck's spellcheck.collateParam.* param prefix, 
> highlighting could add a means to specify additional parameters for hl.q to 
> be parsed (not just the choice of query parsers).  This isn't particularly 
> pressing though since this can easily be added to the front of hl.q like 
> hl.q={!edismax qf=$hl.fl v=$q}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-29 Thread Upayavira (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419281#comment-16419281
 ] 

Upayavira commented on SOLR-7896:
-

Let's just be clear what we are talking about here.

The Admin UI is a set of HTML and JS files.

It makes use of a set of APIs, that are typically JSON over HTTP: the same APIs 
as end users use.

So talking about one auth for the UI and one for the API doesn't entirely make 
sense. Serving the UI files up over a different auth scheme may be possible, 
but without the APIs they are pretty darn useless, no?

So what are we actually talking about here?

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.3-Linux (64bit/jdk-9.0.4) - Build # 86 - Unstable!

2018-03-29 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Linux/86/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyPropertiesV1

Error Message:
Unexpected status: HTTP/1.1 400 Bad Request

Stack Trace:
java.lang.AssertionError: Unexpected status: HTTP/1.1 400 Bad Request
at 
__randomizedtesting.SeedInfo.seed([9BAA48E91C4731AD:B81BC989212D337A]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AliasIntegrationTest.assertSuccess(AliasIntegrationTest.java:320)
at 
org.apache.solr.cloud.AliasIntegrationTest.testModifyPropertiesV1(AliasIntegrationTest.java:253)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12520 lines...]
   [junit4] Suite: org.apache.solr.cloud.AliasIntegrationTest
   [junit4]   2> 331739 INFO  
(SUITE-AliasIntegrationTest-seed#[9BAA48E91C4731AD]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom 

[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-29 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419250#comment-16419250
 ] 

Shalin Shekhar Mangar commented on SOLR-7896:
-

I agree with Gus here. Ideally, whatever security scheme is enabled for Solr 
APIs, the same should be enabled for the Admin UI. It is a bad idea to have a 
different scheme that is used only by the Admin UI.

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



VOTE: Apache Solr Reference Guide for Solr 7.3 RC1

2018-03-29 Thread Cassandra Targett
Please vote to release the Apache Solr Reference Guide for Solr 7.3.

The artifacts can be downloaded from:
https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-7.3-RC1/

$ cat apache-solr-ref-guide-7.3.pdf.sha1
151f06d920d1ac41564f3c0ddabae3c2c36b6892  apache-solr-ref-guide-7.3.pdf

The HTML version has also been uploaded to the website:
https://lucene.apache.org/solr/guide/7_3/

Here's my +1.

If it happens that this vote passes before the vote for the final
Lucene/Solr RC is complete, I'll hold release/announcement of the Ref Guide
until the vote is complete and the release steps are finished.

Thanks,
Cassandra


[jira] [Commented] (SOLR-7896) Add a login page for Solr Administrative Interface

2018-03-29 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7896?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419227#comment-16419227
 ] 

Jan Høydahl commented on SOLR-7896:
---

Ok, so some kind of fallback Auth that is disabled by default but can be turned 
on if you need to use a primary Auth not yet natively supported by the AdminUI.
Another option is to allow more than one Auth plugin to be enabled at the same 
time, and let the framework resolve which one to use for each request.

> Add a login page for Solr Administrative Interface
> --
>
> Key: SOLR-7896
> URL: https://issues.apache.org/jira/browse/SOLR-7896
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, security
>Affects Versions: 5.2.1
>Reporter: Aaron Greenspan
>Priority: Major
>  Labels: authentication, login, password
>
> Out of the box, the Solr Administrative interface should require a password 
> that the user is required to set.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12156) "" in solrconfig.xml similar to ""

2018-03-29 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419206#comment-16419206
 ] 

Shawn Heisey commented on SOLR-12156:
-

bq. But if a novice developer calls optimize after each commit, I want my 
configurations to protect my index from that.

Use IgnoreCommitOptimizeUpdateProcessorFactory.  I included links to 
documentation above.  For more information about how to use update processors, 
see the Solr reference guide.

bq. I want to configure internally a downtime where the optimize will auto 
initiate

You can add an entry to the scheduling software in your OS to do this.  On 
Linux/Unix, it's usually very easy -- configure an entry in your crontab that 
uses curl or wget to make an HTTP request to initiate the optimize.  There are 
ways to do it on Windows as well.
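
For example, a crontab entry along these lines (host, collection name, and time are assumptions) would trigger the optimize nightly: 0 2 * * * curl "http://localhost:8983/solr/mycollection/update?optimize=true". If you would rather drive the same idea from Java instead of cron (not what Shawn describes above, just an equivalent of the curl/wget call on a timer), a rough SolrJ sketch:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class NightlyOptimize {
      public static void main(String[] args) {
        // Hypothetical names: the Solr URL and collection are assumptions, not from this issue.
        HttpSolrClient client =
            new HttpSolrClient.Builder("http://localhost:8983/solr").build();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Run once a day; computing the delay to an actual maintenance window is left out.
        scheduler.scheduleAtFixedRate(() -> {
          try {
            client.optimize("mycollection");   // same effect as the curl/wget HTTP request
          } catch (Exception e) {
            e.printStackTrace();               // a real job would log and alert instead
          }
        }, 0, 24, TimeUnit.HOURS);
      }
    }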


> "" in solrconfig.xml similar to ""
> ---
>
> Key: SOLR-12156
> URL: https://issues.apache.org/jira/browse/SOLR-12156
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Reporter: Indranil Majumder
>Priority: Major
>
> It would be great to have the below config parameters in solrconfig.xml :-
>  # maxMergedSegmentMB (same as in TieredMergePolicy)
>  # maxOptimizeSegments (same as in solrjClient.optimize(waitFlush, waitSearcher, maxSegments))
>  # expungeDeletes (true/false)
>  # optimizeTimeout (in minutes)
>  # disableExplicitOptimize (disable all solrClient calls to optimize)
>  ## optimizeJobCron - associated with disableExplicitOptimize, where optimization 
> can be carried out during a maintenance period defined by the cron expression.
> This in turn needs to be respected by DirectUpdateHandler2 or any other 
> handler trying to optimize the index



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10513) CLONE - ConjunctionSolrSpellChecker wrong check for same string distance

2018-03-29 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419199#comment-16419199
 ] 

Amrit Sarkar commented on SOLR-10513:
-

Final patch uploaded with proper comments. {{ant precommit}} is not working on 
my system (some random JARs are not found). Tests added based on the recommendation.

> CLONE - ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-10513
> URL: https://issues.apache.org/jira/browse/SOLR-10513
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Abhishek Kumar Singh
>Assignee: James Dyer
>Priority: Major
> Fix For: 5.5
>
> Attachments: SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, 
> SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> In line stringDistance != checker.getStringDistance() there is comparing by 
> references. So if you are using 2 or more spellcheckers with same distance 
> algorithm, exception will be thrown anyway.
> *Update:* As of Solr 6.5, this has been changed to 
> *stringDistance.equals(checker.getStringDistance())* .
> However, *LuceneLevenshteinDistance* does not even override equals method. 
> This does not solve the problem yet, because the *default equals* method 
> anyway compares references.
> Hence unable to use *FileBasedSolrSpellChecker* .  
> Moreover, Some check of similar sorts should also be in the init method. So 
> that user does not have to wait for this error during query time. If the 
> spellcheck components have been added *solrconfig.xml* , it should throw 
> error during core-reload itself.  
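
A minimal sketch illustrating the problem quoted above (this is not the attached patch): LuceneLevenshteinDistance does not override equals(), so two spellcheckers built with separate instances still fail the "same StringDistance" check even after the switch to equals()-based comparison.

    import org.apache.lucene.search.spell.LuceneLevenshteinDistance;
    import org.apache.lucene.search.spell.StringDistance;

    public class DistanceEqualityDemo {
      public static void main(String[] args) {
        StringDistance d1 = new LuceneLevenshteinDistance();
        StringDistance d2 = new LuceneLevenshteinDistance();
        System.out.println(d1 == d2);      // false: two different instances
        System.out.println(d1.equals(d2)); // false here: per this issue, equals() is not
                                           // overridden, so Object.equals compares references
        // One direction sketched in this issue: compare stateless distances by class,
        // or override equals()/hashCode() in the distance implementation itself.
        System.out.println(d1.getClass() == d2.getClass()); // true
      }
    }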



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10513) CLONE - ConjunctionSolrSpellChecker wrong check for same string distance

2018-03-29 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10513:

Attachment: SOLR-10513.patch

> CLONE - ConjunctionSolrSpellChecker wrong check for same string distance
> 
>
> Key: SOLR-10513
> URL: https://issues.apache.org/jira/browse/SOLR-10513
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 4.9
>Reporter: Abhishek Kumar Singh
>Assignee: James Dyer
>Priority: Major
> Fix For: 5.5
>
> Attachments: SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch, 
> SOLR-10513.patch, SOLR-10513.patch, SOLR-10513.patch
>
>
> See ConjunctionSolrSpellChecker.java
> try {
>   if (stringDistance == null) {
> stringDistance = checker.getStringDistance();
>   } else if (stringDistance != checker.getStringDistance()) {
> throw new IllegalArgumentException(
> "All checkers need to use the same StringDistance.");
>   }
> } catch (UnsupportedOperationException uoe) {
>   // ignore
> }
> In line stringDistance != checker.getStringDistance() there is comparing by 
> references. So if you are using 2 or more spellcheckers with same distance 
> algorithm, exception will be thrown anyway.
> *Update:* As of Solr 6.5, this has been changed to 
> *stringDistance.equals(checker.getStringDistance())* .
> However, *LuceneLevenshteinDistance* does not even override equals method. 
> This does not solve the problem yet, because the *default equals* method 
> anyway compares references.
> Hence unable to use *FileBasedSolrSpellChecker* .  
> Moreover, Some check of similar sorts should also be in the init method. So 
> that user does not have to wait for this error during query time. If the 
> spellcheck components have been added *solrconfig.xml* , it should throw 
> error during core-reload itself.  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic

2018-03-29 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419178#comment-16419178
 ] 

Jim Ferenczi commented on LUCENE-8231:
--

Sure I attached a new patch (LUCENE-8231-remap-hangul.patch) that applies the 
remap at build and analyze time. I skipped all entries that are not hangul or 
latin-1 chars to make it easier to test. I must have missed something, so thanks 
for testing!
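
For readers unfamiliar with the Lucene analysis API, a brief usage sketch of what consuming such an analyzer looks like. This assumes the module exposes an analyzer class named KoreanAnalyzer (as the eventual Nori module does under org.apache.lucene.analysis.ko); the attached POC may differ, and the field name and sample text are made up:

    import org.apache.lucene.analysis.Analyzer;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.ko.KoreanAnalyzer;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    public class KoreanAnalyzerDemo {
      public static void main(String[] args) throws Exception {
        try (Analyzer analyzer = new KoreanAnalyzer()) {
          // Tokenize a short Korean phrase and print the resulting morphemes.
          TokenStream ts = analyzer.tokenStream("body", "한국어 형태소 분석기");
          CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
          ts.reset();
          while (ts.incrementToken()) {
            System.out.println(term.toString());
          }
          ts.end();
          ts.close();
        }
      }
    }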

 

> Nori, a Korean analyzer based on mecab-ko-dic
> -
>
> Key: LUCENE-8231
> URL: https://issues.apache.org/jira/browse/LUCENE-8231
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8231-remap-hangul.patch, LUCENE-8231.patch, 
> LUCENE-8231.patch
>
>
> There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic:
> It is available under an Apache license here:
> https://bitbucket.org/eunjeon/mecab-ko-dic
> This dictionary was built with MeCab, it defines a format for the features 
> adapted for the Korean language.
> Since the Kuromoji tokenizer uses the same format for the morphological 
> analysis (left cost + right cost + word cost) I tried to adapt the module to 
> handle Korean with the mecab-ko-dic. I've started with a POC that copies the 
> Kuromoji module and adapts it for the mecab-ko-dic.
> I used the same classes to build and read the dictionary but I had to make 
> some modifications to handle the differences with the IPADIC and Japanese. 
> The resulting binary dictionary takes 28MB on disk, it's bigger than the 
> IPADIC but mainly because the source is bigger and there are a lot of
> compound and inflect terms that define a group of terms and the segmentation 
> that can be applied. 
> I attached the patch that contains this new Korean module called -godori- 
> nori. It is an adaptation of the Kuromoji module so currently
> the two modules don't share any code. I wanted to validate the approach first 
> and check the relevancy of the results. I don't speak Korean so I used the 
> relevancy
> tests that was added for another Korean tokenizer 
> (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output 
> against mecab-ko which is the official fork of mecab to use the mecab-ko-dic.
> I had to simplify the JapaneseTokenizer, my version removes the nBest output 
> and the decomposition of too long tokens. I also
> modified the handling of whitespaces since they are important in Korean. 
> Whitespaces that appear before a term are attached to that term and this
> information is used to compute a penalty based on the Part of Speech of the 
> token. The penalty cost is a feature added to mecab-ko to handle 
> morphemes that should not appear after a morpheme and is described in the 
> mecab-ko page:
> https://bitbucket.org/eunjeon/mecab-ko
> Ignoring whitespaces is also more inlined with the official MeCab library 
> which attach the whitespaces to the term that follows.
> I also added a decompounder filter that expand the compounds and inflects 
> defined in the dictionary and a part of speech filter similar to the Japanese
> that removes the morpheme that are not useful for relevance (suffix, prefix, 
> interjection, ...). These filters don't play well with the tokenizer if it 
> can 
> output multiple paths (nBest output for instance) so for simplicity I removed 
> this ability and the Korean tokenizer only outputs the best path.
> I compared the result with mecab-ko to confirm that the analyzer is working 
> and ran the relevancy test that is defined in HantecRel.java included
> in the patch (written by Robert for another Korean analyzer). Here are the 
> results:
> ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)||
> |Standard|35s|131MB|.007|.1044|.1053|
> |CJK|36s|164MB|.1418|.1924|.1916|
> |Korean|212s|90MB|.1628|.2094|.2078|
> I find the results very promising so I plan to continue to work on this 
> project. I started to extract the part of the code that could be shared with 
> the
> Kuromoji module but I wanted to share the status and this POC first to 
> confirm that this approach is viable. The advantages of using the same model as
> the Japanese analyzer are multiple: we don't have a Korean analyzer at the 
> moment ;), the resulting dictionary is small compared to other libraries that
> use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the 
> lattice on the fly to select the best path efficiently.
> The dictionary can be built directly from the godori module with the 
> following command:
> ant regenerate (you need to create the resource directory (mkdir 
> lucene/analysis/godori/src/resources/org/apache/lucene/analysis/ko/dict) 
> first since the dictionary is not included in the patch).
> I've also added some minimal tests in the module to play with the analysis.
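
The "left cost + right cost + word cost" model mentioned above is easiest to see as
a score over a candidate segmentation: each morpheme contributes its word cost, and
each adjacent pair contributes the connection cost between the right context id of
the first morpheme and the left context id of the second; the tokenizer keeps the
cheapest path. A toy sketch of that scoring (hypothetical helper, not code from the
patch):

// Toy illustration of the MeCab-style path score: word costs plus connection
// costs between adjacent morphemes. The arrays describe one candidate path.
static int pathCost(int[] wordCost, int[] leftId, int[] rightId, int[][] connectionCost) {
  int total = 0;
  for (int i = 0; i < wordCost.length; i++) {
    if (i > 0) {
      total += connectionCost[rightId[i - 1]][leftId[i]];
    }
    total += wordCost[i];
  }
  return total;
}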

[jira] [Updated] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic

2018-03-29 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-8231:
-
Attachment: LUCENE-8231-remap-hangul.patch

> Nori, a Korean analyzer based on mecab-ko-dic
> -
>
> Key: LUCENE-8231
> URL: https://issues.apache.org/jira/browse/LUCENE-8231
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8231-remap-hangul.patch, LUCENE-8231.patch, 
> LUCENE-8231.patch
>
>
> There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic:
> It is available under an Apache license here:
> https://bitbucket.org/eunjeon/mecab-ko-dic
> This dictionary was built with MeCab; it defines a format for the features
> adapted for the Korean language.
> Since the Kuromoji tokenizer uses the same format for the morphological 
> analysis (left cost + right cost + word cost) I tried to adapt the module to 
> handle Korean with the mecab-ko-dic. I've started with a POC that copies the 
> Kuromoji module and adapts it for the mecab-ko-dic.
> I used the same classes to build and read the dictionary but I had to make 
> some modifications to handle the differences with the IPADIC and Japanese. 
> The resulting binary dictionary takes 28MB on disk, it's bigger than the 
> IPADIC but mainly because the source is bigger and there are a lot of
> compound and inflect terms that define a group of terms and the segmentation 
> that can be applied. 
> I attached the patch that contains this new Korean module called -godori- 
> nori. It is an adaptation of the Kuromoji module so currently
> the two modules don't share any code. I wanted to validate the approach first 
> and check the relevancy of the results. I don't speak Korean so I used the 
> relevancy
> tests that were added for another Korean tokenizer
> (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output 
> against mecab-ko which is the official fork of mecab to use the mecab-ko-dic.
> I had to simplify the JapaneseTokenizer; my version removes the nBest output
> and the decomposition of overly long tokens. I also
> modified the handling of whitespaces since they are important in Korean. 
> Whitespaces that appear before a term are attached to that term and this
> information is used to compute a penalty based on the Part of Speech of the 
> token. The penalty cost is a feature added to mecab-ko to handle 
> morphemes that should not appear after a morpheme and is described in the 
> mecab-ko page:
> https://bitbucket.org/eunjeon/mecab-ko
> Ignoring whitespaces is also more in line with the official MeCab library,
> which attaches the whitespaces to the term that follows.
> I also added a decompounder filter that expands the compounds and inflects
> defined in the dictionary, and a part-of-speech filter similar to the Japanese one
> that removes the morphemes that are not useful for relevance (suffix, prefix,
> interjection, ...). These filters don't play well with the tokenizer if it can
> output multiple paths (nBest output for instance) so for simplicity I removed 
> this ability and the Korean tokenizer only outputs the best path.
> I compared the result with mecab-ko to confirm that the analyzer is working 
> and ran the relevancy test that is defined in HantecRel.java included
> in the patch (written by Robert for another Korean analyzer). Here are the 
> results:
> ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)||
> |Standard|35s|131MB|.007|.1044|.1053|
> |CJK|36s|164MB|.1418|.1924|.1916|
> |Korean|212s|90MB|.1628|.2094|.2078|
> I find the results very promising so I plan to continue to work on this 
> project. I started to extract the part of the code that could be shared with 
> the
> Kuromoji module but I wanted to share the status and this POC first to 
> confirm that this approach is viable. The advantages of using the same model as
> the Japanese analyzer are multiple: we don't have a Korean analyzer at the 
> moment ;), the resulting dictionary is small compared to other libraries that
> use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the 
> lattice on the fly to select the best path efficiently.
> The dictionary can be built directly from the godori module with the 
> following command:
> ant regenerate (you need to create the resource directory (mkdir 
> lucene/analysis/godori/src/resources/org/apache/lucene/analysis/ko/dict) 
> first since the dictionary is not included in the patch).
> I've also added some minimal tests in the module to play with the analysis.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12161) CloudSolrClient with basic auth enabled will update even if no credentials supplied

2018-03-29 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419139#comment-16419139
 ] 

Erick Erickson commented on SOLR-12161:
---

[~janhoy] re: SOLR-10453. Certainly possible, but I doubt it. I don't know how
that would explain the fact that the _same_ update request first succeeds in
sending the docs and then fails the commit with "auth required".

I'll leave it in Noble's capable hands from here on out...

> CloudSolrClient with basic auth enabled will update even if no credentials 
> supplied
> ---
>
> Key: SOLR-12161
> URL: https://issues.apache.org/jira/browse/SOLR-12161
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 7.3
>Reporter: Erick Erickson
>Assignee: Noble Paul
>Priority: Major
> Attachments: AuthUpdateTest.java
>
>
> This is an offshoot of SOLR-9399. While writing a test I found that, if I create a
> cluster with basic authentication set up, I can _still_ add documents to a
> collection even without credentials being set in the request.
> However, simple queries, commits etc. all fail without credentials set in the 
> request.
> I'll attach a test class that illustrates the problem.
> If I use a new HttpSolrClient instead of a CloudSolrClient, then the update 
> request fails as expected.
> [~noblepaul] do you have any insight here? Possibly something with splitting 
> up the update request to go to each individual shard?
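
A minimal sketch of the behaviour described above might look like the following
(hypothetical ZooKeeper address and collection name, SolrJ 7.x API; the attached
AuthUpdateTest.java is the authoritative reproduction):

import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class NoCredentialsUpdateSketch {
  public static void main(String[] args) throws Exception {
    // No basic-auth credentials are set on this client or on its requests.
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      client.add("testCollection", doc);   // reportedly succeeds even with basic auth enabled
      client.commit("testCollection");     // reportedly fails with an authentication error
    }
  }
}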



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic

2018-03-29 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419140#comment-16419140
 ] 

Robert Muir commented on LUCENE-8231:
-

Well, according to my commit years ago it was "smaller and much faster", but I
don't remember exactly what the numbers were, only that it was worth the
trouble. Maybe something wasn't quite right? I remember it making a big
difference for lookup performance. Do you have a patch for your experiment
somewhere? I wouldn't mind taking a look to see if it was something silly.
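
For context, the per-character walk that dominates dictionary lookup looks roughly
like this with the public org.apache.lucene.util.fst API (a sketch assuming Long
outputs as Kuromoji uses, not code from either patch); every character costs one
findTargetArc call, which is why a smaller, denser label alphabet or cached root
arcs can matter so much:

import java.io.IOException;
import org.apache.lucene.util.fst.FST;

final class FstLookupSketch {
  // Returns the accumulated output for the given term, or null if it is absent.
  static Long lookup(FST<Long> fst, char[] term, int offset, int length) throws IOException {
    final FST.BytesReader reader = fst.getBytesReader();
    FST.Arc<Long> arc = fst.getFirstArc(new FST.Arc<Long>());
    long output = 0;
    for (int i = 0; i < length; i++) {
      if (fst.findTargetArc(term[offset + i], arc, arc, reader) == null) {
        return null;                 // no outgoing arc for this label
      }
      output += arc.output;          // Long outputs accumulate along the path
    }
    return arc.isFinal() ? output + arc.nextFinalOutput : null;
  }
}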

> Nori, a Korean analyzer based on mecab-ko-dic
> -
>
> Key: LUCENE-8231
> URL: https://issues.apache.org/jira/browse/LUCENE-8231
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8231.patch, LUCENE-8231.patch
>
>
> There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic:
> It is available under an Apache license here:
> https://bitbucket.org/eunjeon/mecab-ko-dic
> This dictionary was built with MeCab; it defines a format for the features
> adapted for the Korean language.
> Since the Kuromoji tokenizer uses the same format for the morphological 
> analysis (left cost + right cost + word cost) I tried to adapt the module to 
> handle Korean with the mecab-ko-dic. I've started with a POC that copies the 
> Kuromoji module and adapts it for the mecab-ko-dic.
> I used the same classes to build and read the dictionary but I had to make 
> some modifications to handle the differences with the IPADIC and Japanese. 
> The resulting binary dictionary takes 28MB on disk, it's bigger than the 
> IPADIC but mainly because the source is bigger and there are a lot of
> compound and inflect terms that define a group of terms and the segmentation 
> that can be applied. 
> I attached the patch that contains this new Korean module called -godori- 
> nori. It is an adaptation of the Kuromoji module so currently
> the two modules don't share any code. I wanted to validate the approach first 
> and check the relevancy of the results. I don't speak Korean so I used the 
> relevancy
> tests that were added for another Korean tokenizer
> (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output 
> against mecab-ko which is the official fork of mecab to use the mecab-ko-dic.
> I had to simplify the JapaneseTokenizer; my version removes the nBest output
> and the decomposition of overly long tokens. I also
> modified the handling of whitespaces since they are important in Korean. 
> Whitespaces that appear before a term are attached to that term and this
> information is used to compute a penalty based on the Part of Speech of the 
> token. The penalty cost is a feature added to mecab-ko to handle 
> morphemes that should not appear after a morpheme and is described in the 
> mecab-ko page:
> https://bitbucket.org/eunjeon/mecab-ko
> Ignoring whitespaces is also more in line with the official MeCab library,
> which attaches the whitespaces to the term that follows.
> I also added a decompounder filter that expands the compounds and inflects
> defined in the dictionary, and a part-of-speech filter similar to the Japanese one
> that removes the morphemes that are not useful for relevance (suffix, prefix,
> interjection, ...). These filters don't play well with the tokenizer if it can
> output multiple paths (nBest output for instance) so for simplicity I removed 
> this ability and the Korean tokenizer only outputs the best path.
> I compared the result with mecab-ko to confirm that the analyzer is working 
> and ran the relevancy test that is defined in HantecRel.java included
> in the patch (written by Robert for another Korean analyzer). Here are the 
> results:
> ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)||
> |Standard|35s|131MB|.007|.1044|.1053|
> |CJK|36s|164MB|.1418|.1924|.1916|
> |Korean|212s|90MB|.1628|.2094|.2078|
> I find the results very promising so I plan to continue to work on this 
> project. I started to extract the part of the code that could be shared with 
> the
> Kuromoji module but I wanted to share the status and this POC first to 
> confirm that this approach is viable. The advantages of using the same model as
> the Japanese analyzer are multiple: we don't have a Korean analyzer at the 
> moment ;), the resulting dictionary is small compared to other libraries that
> use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the 
> lattice on the fly to select the best path efficiently.
> The dictionary can be built directly from the godori module with the 
> following command:
> ant regenerate (you need to create the resource directory (mkdir 
> lucene/analysis/godori/src/resources/org/apache/lucene/analysis/ko/dict) 
> first since the dictionary is not included in the patch).

[jira] [Commented] (LUCENE-8231) Nori, a Korean analyzer based on mecab-ko-dic

2018-03-29 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16419121#comment-16419121
 ] 

Jim Ferenczi commented on LUCENE-8231:
--

I tried this approach and generated a new FST with the remapped chars. The size of
the FST after conversion is 4MB, plus 1MB for the separate Hanja FST, which is
roughly the same size as the FST with the Hangul syllables and the Hanja together
(5.4MB). I also ran the HantecRel indexation and it took approximately 235s
to build (I tried multiple times and the times were pretty consistent) with
root caching for the first 255 arcs. That's surprising because it's slower than
the FST with Hangul syllables and root caching (200s), so I wonder if this feature
is worth the complexity? I checked the size of the root caching for the 11,171
Hangul syllables and it takes approximately 250k, so that's not bad
considering that this version is faster.

 

I'll try the compression for compounds now.
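
For reference, the root-arc caching being measured here is essentially the trick
TokenInfoFST uses for Kuromoji: resolve the root node's outgoing arc once per
likely first character and keep a copy, so later lookups skip the first
findTargetArc call. A sketch of the same idea over the Hangul syllable block (my
own illustration, not the patch):

import java.io.IOException;
import org.apache.lucene.util.fst.FST;

final class HangulRootCacheSketch {
  // Cache one Arc copy per precomposed Hangul syllable (U+AC00..U+D7A3).
  @SuppressWarnings({"unchecked", "rawtypes"})
  static FST.Arc<Long>[] cacheRootArcs(FST<Long> fst) throws IOException {
    final int first = 0xAC00, last = 0xD7A3;
    FST.Arc<Long>[] cache = new FST.Arc[last - first + 1];
    FST.Arc<Long> rootArc = fst.getFirstArc(new FST.Arc<Long>());
    FST.Arc<Long> scratch = new FST.Arc<>();
    FST.BytesReader reader = fst.getBytesReader();
    for (int label = first; label <= last; label++) {
      if (fst.findTargetArc(label, rootArc, scratch, reader) != null) {
        cache[label - first] = new FST.Arc<Long>().copyFrom(scratch);
      }
    }
    return cache;
  }
}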

> Nori, a Korean analyzer based on mecab-ko-dic
> -
>
> Key: LUCENE-8231
> URL: https://issues.apache.org/jira/browse/LUCENE-8231
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Jim Ferenczi
>Priority: Major
> Attachments: LUCENE-8231.patch, LUCENE-8231.patch
>
>
> There is a dictionary similar to IPADIC but for Korean called mecab-ko-dic:
> It is available under an Apache license here:
> https://bitbucket.org/eunjeon/mecab-ko-dic
> This dictionary was built with MeCab; it defines a format for the features
> adapted for the Korean language.
> Since the Kuromoji tokenizer uses the same format for the morphological 
> analysis (left cost + right cost + word cost) I tried to adapt the module to 
> handle Korean with the mecab-ko-dic. I've started with a POC that copies the 
> Kuromoji module and adapts it for the mecab-ko-dic.
> I used the same classes to build and read the dictionary but I had to make 
> some modifications to handle the differences with the IPADIC and Japanese. 
> The resulting binary dictionary takes 28MB on disk, it's bigger than the 
> IPADIC but mainly because the source is bigger and there are a lot of
> compound and inflect terms that define a group of terms and the segmentation 
> that can be applied. 
> I attached the patch that contains this new Korean module called -godori- 
> nori. It is an adaptation of the Kuromoji module so currently
> the two modules don't share any code. I wanted to validate the approach first 
> and check the relevancy of the results. I don't speak Korean so I used the 
> relevancy
> tests that were added for another Korean tokenizer
> (https://issues.apache.org/jira/browse/LUCENE-4956) and tested the output 
> against mecab-ko which is the official fork of mecab to use the mecab-ko-dic.
> I had to simplify the JapaneseTokenizer; my version removes the nBest output
> and the decomposition of overly long tokens. I also
> modified the handling of whitespaces since they are important in Korean. 
> Whitespaces that appear before a term are attached to that term and this
> information is used to compute a penalty based on the Part of Speech of the 
> token. The penalty cost is a feature added to mecab-ko to handle 
> morphemes that should not appear after a morpheme and is described in the 
> mecab-ko page:
> https://bitbucket.org/eunjeon/mecab-ko
> Ignoring whitespaces is also more in line with the official MeCab library,
> which attaches the whitespaces to the term that follows.
> I also added a decompounder filter that expands the compounds and inflects
> defined in the dictionary, and a part-of-speech filter similar to the Japanese one
> that removes the morphemes that are not useful for relevance (suffix, prefix,
> interjection, ...). These filters don't play well with the tokenizer if it can
> output multiple paths (nBest output for instance) so for simplicity I removed 
> this ability and the Korean tokenizer only outputs the best path.
> I compared the result with mecab-ko to confirm that the analyzer is working 
> and ran the relevancy test that is defined in HantecRel.java included
> in the patch (written by Robert for another Korean analyzer). Here are the 
> results:
> ||Analyzer||Index Time||Index Size||MAP(CLASSIC)||MAP(BM25)||MAP(GL2)||
> |Standard|35s|131MB|.007|.1044|.1053|
> |CJK|36s|164MB|.1418|.1924|.1916|
> |Korean|212s|90MB|.1628|.2094|.2078|
> I find the results very promising so I plan to continue to work on this 
> project. I started to extract the part of the code that could be shared with 
> the
> Kuromoji module but I wanted to share the status and this POC first to 
> confirm that this approach is viable. The advantages of using the same model as
> the Japanese analyzer are multiple: we don't have a Korean analyzer at the 
> moment ;), the resulting dictionary is small compared to other libraries that
> use the mecab-ko-dic (the FST takes only 5.4MB) and the Tokenizer prunes the
> lattice on the fly to select the best path efficiently.
