[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704564#comment-14704564
 ] 

Ramkumar Aiyengar commented on SOLR-6760:
-

In the output above, 'success' for the new way is 3 instead of 20k; some test bug?

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, 
 SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows:
 * read all items in the directory
 * sort them all
 * take the head, return it, and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the queue. If the 
 number of items in the queue is much larger (in the tens of thousands), this 
 is counterproductive.
 As the overseer queue is a multiple-producer, single-consumer queue, we can 
 read all items in bulk and, before processing each item, just do a 
 zk.exists(itemname); if all is well, we don't need to do the fetch-all-and-sort 
 again.
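
A minimal sketch of the idea against the plain ZooKeeper client API
(getChildren/exists/getData/delete); this is illustrative only, not the
attached patch, and the queue path and class name here are made up:

    import java.util.Collections;
    import java.util.List;
    import org.apache.zookeeper.KeeperException;
    import org.apache.zookeeper.ZooKeeper;

    class BulkQueueSketch {
      private final ZooKeeper zk;
      private final String dir; // e.g. "/overseer/queue" (illustrative)

      BulkQueueSketch(ZooKeeper zk, String dir) {
        this.zk = zk;
        this.dir = dir;
      }

      // One fetch-all + sort; each head is then re-validated with a cheap
      // zk.exists() instead of re-listing and re-sorting the directory.
      void drainOnce() throws KeeperException, InterruptedException {
        List<String> items = zk.getChildren(dir, false); // single bulk read
        Collections.sort(items);                         // single sort
        for (String item : items) {
          String path = dir + "/" + item;
          if (zk.exists(path, false) == null) {
            continue; // already consumed; no fetch-all + sort needed
          }
          byte[] data = zk.getData(path, false, null);
          process(data);
          zk.delete(path, -1); // -1 ignores the znode version
        }
      }

      private void process(byte[] data) { /* hand off to the overseer loop */ }
    }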






[jira] [Comment Edited] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704564#comment-14704564
 ] 

Ramkumar Aiyengar edited comment on SOLR-6760 at 8/20/15 9:17 AM:
--

In the output above, 'success' for the new way is 3 instead of 20k; some test 
bug? Or is it counting each batch as one op?


was (Author: andyetitmoves):
In the output above, 'success' for the new way is 3 instead of 20k; some test bug?




[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_51) - Build # 5175 - Still Failing!

2015-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5175/
Java: 32bit/jdk1.8.0_51 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20150820083945404, index.20150820083946758, index.properties, 
replication.properties] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20150820083945404, index.20150820083946758, 
index.properties, replication.properties] expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([E93B082DD49:D53E5595B5AAB4FA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.handler.TestReplicationHandler.checkForSingleIndex(TestReplicationHandler.java:818)
at 
org.apache.solr.handler.TestReplicationHandler.doTestStressReplication(TestReplicationHandler.java:785)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Unit testing our UIs

2015-08-20 Thread Stefan Matheis
I don't know about testing in angular.js but in general there are fancy tools 
for frontend testing:  

http://casperjs.org/
http://phantomjs.org/

angular.js' documentation on testing:

https://docs.angularjs.org/guide/unit-testing
https://docs.angularjs.org/guide/e2e-testing

  As I understand it, we'd need a browser (e.g. Chrome) running in the
  background, which would make it a different kind of test to the rest of
  our test suite.
  

so I'm not sure a browser really is a requirement :)

-Stefan
  



On Thursday, August 20, 2015 at 5:20 PM, Upayavira wrote:

  
  
 On Thu, Aug 20, 2015, at 04:19 PM, Upayavira wrote:
   
   
  On Thu, Aug 20, 2015, at 03:06 PM, Jan Høydahl wrote:
   Hi

   We’re adding more and more UIs to Solr, and they have no unit tests (as
   far as I know). I could not find any discussions on this topic in the
   list archives, so I thought I'd bring it up here.

   I only know about Selenium; it could be cool to write up some simple tests
   exercising key parts of the Admin UI in various browsers. Or?

   
   
  I would love to work out how to test Angular UIs - it has been on my
  todo list for some time.
   
  As I understand it, we'd need a browser (e.g. Chrome) running in the
  background, which would make it a different kind of test to the rest of
  our test suite.
   
  I'm game for working on it. If we can get a single test that works, I
  can start using it more widely across the UI.
   
  
  
 I should say that our AngularUI should be much more testable than the
 existing one, as it is more modular, and can be tested (even without a
 browser) using dependency injection.
  
 Upayavira
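
For the record, a minimal sketch of a browser-driven smoke test using
Selenium's Java bindings, as Jan suggested; the Admin UI URL and the element
id checked here are assumptions, not an existing test:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class AdminUISmokeTest {
      public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver(); // needs a local Firefox
        try {
          // Assumes a Solr instance running on the default port.
          driver.get("http://localhost:8983/solr/#/");
          // "menu" is a hypothetical element id; adjust to the real markup.
          if (driver.findElements(By.id("menu")).isEmpty()) {
            throw new AssertionError("Admin UI navigation did not render");
          }
        } finally {
          driver.quit();
        }
      }
    }

As Upayavira notes, module-level tests via dependency injection would avoid
the browser entirely; the sketch above only covers the end-to-end case.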
  
  
  




[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 297 - Failure

2015-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/297/

1 tests failed.
REGRESSION:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.test

Error Message:
There should be 3 documents because there should be two id=1 docs due to 
overwrite=false expected:<3> but was:<1>

Stack Trace:
java.lang.AssertionError: There should be 3 documents because there should be 
two id=1 docs due to overwrite=false expected:<3> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([F2BEC23974BD3686:7AEAFDE3DA415B7E]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testOverwriteOption(CloudSolrClientTest.java:159)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.test(CloudSolrClientTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS] Lucene-Solr-Tests-5.3-Java7 - Build # 24 - Still Failing

2015-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.3-Java7/24/

2 tests failed.
REGRESSION:  org.apache.solr.cloud.TestReplicaProperties.test

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:59915, 
https://127.0.0.1:49239, https://127.0.0.1:34262, https://127.0.0.1:53819, 
https://127.0.0.1:53201]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:59915, https://127.0.0.1:49239, 
https://127.0.0.1:34262, https://127.0.0.1:53819, https://127.0.0.1:53201]
at 
__randomizedtesting.SeedInfo.seed([879F03022915695A:FCB3CD887E904A2]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:355)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1085)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:856)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:799)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.ReplicaPropertiesBase.doPropertyAction(ReplicaPropertiesBase.java:51)
at 
org.apache.solr.cloud.TestReplicaProperties.clusterAssignPropertyTest(TestReplicaProperties.java:183)
at 
org.apache.solr.cloud.TestReplicaProperties.test(TestReplicaProperties.java:70)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-4212) Tests should not use new Random() without args

2015-08-20 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704619#comment-14704619
 ] 

Dawid Weiss commented on LUCENE-4212:
-

Well spotted, Mikhail. I think {{Math.random()}} could be banned -- I think 
it's an oversight. {{new Random(long)}} is sometimes handy if you have tight 
loops with super large numbers of repetitions (in which case the randomized 
context's Random may be slow because it runs certain sanity checks).

 Tests should not use new Random() without args
 --

 Key: LUCENE-4212
 URL: https://issues.apache.org/jira/browse/LUCENE-4212
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Robert Muir
 Fix For: 4.0-ALPHA, Trunk

 Attachments: LUCENE-4212.patch, LUCENE-4212.patch, LUCENE-4212.patch, 
 LUCENE-4212.patch


 They should be using random() etc, and if they create one, it should pass in 
 a seed.
 Otherwise, they probably won't reproduce.
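
To make the two patterns concrete, a hedged sketch of what a
LuceneTestCase-based test should and should not do (the test class and
assertion are made up for illustration):

    import java.util.Random;
    import org.apache.lucene.util.LuceneTestCase;

    public class TestSeedDiscipline extends LuceneTestCase {
      public void testReproducible() {
        // Bad: seeded from the clock, so a failure cannot be replayed.
        // Random r = new Random();

        // Good: random() derives from the master seed, so the run is
        // reproducible with -Dtests.seed=...
        Random r = random();

        // Also fine for tight loops (per Dawid's comment above): a private
        // Random forked from the framework seed, avoiding the per-call
        // sanity checks the randomized context's Random performs.
        Random fast = new Random(random().nextLong());

        assertTrue(r.nextInt(10) >= 0 && fast.nextInt(10) >= 0);
      }
    }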






RE: [VOTE] 5.3.0 RC2

2015-08-20 Thread Uwe Schindler
Hi Noble, did you prepare release notes for Solr and Lucene in the Wiki 
already? I did not find an announcement about that!

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

 -Original Message-
 From: Noble Paul [mailto:noble.p...@gmail.com]
 Sent: Thursday, August 20, 2015 5:42 AM
 To: Lucene Dev
 Subject: Re: [VOTE] 5.3.0 RC2
 
 Thanks everyone.
 We now have enough votes for the RC2 to be released.
 
 I shall start the process of publishing and releasing this.
 
 On Thu, Aug 20, 2015 at 8:20 AM, Yonik Seeley ysee...@gmail.com wrote:
  +1
 
  -Yonik
 
  On Mon, Aug 17, 2015 at 8:24 AM, Noble Paul noble.p...@gmail.com
 wrote:
  hi all,
  Please vote for the 2nd release candidate for Lucene/Solr 5.3.0
 
  The artifacts can be downloaded from:
  https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.0-RC2-r
  ev1696229
 
  You can run the smoke tester directly with this command:
 
  python3 -u dev-tools/scripts/smokeTestRelease.py
  https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-5.3.0-RC2-r
  ev1696229/
 
 
  --
  -
  Noble Paul
 
 
 
 
 
 
 
 --
 -
 Noble Paul
 





[jira] [Commented] (SOLR-5103) Plugin Improvements

2015-08-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704464#comment-14704464
 ] 

Jan Høydahl commented on SOLR-5103:
---

bq. What happens if a plugin project uses one of the same dependent jars as 
Solr, but packages a wildly different version than the version we package?

I think we don't need to re-invent OSGi to get a better plugin regime for Solr. 
We can document simple requirements for developers to follow:
* Never include libraries or classes that are already part of core Lucene/Solr
* In your {{solrplugin.properties}}, list the Solr version(s) that the plugin 
is tested with (and our tooling could require a {{--force}} option to disregard 
this and install anyway)
* etc.

In the first version we can then simply add all jars in the plugin's {{/lib}} 
folder to the classloader. Then, if a future version of Solr causes trouble for 
an older plugin, the plugin maintainer must release a compatible update. When it 
comes to clashes between different 3rd-party plugins, we can tackle that with 
more advanced measures when it happens, or plugin developers could treat such 
cases as bugs and provide a fix themselves. For now let's keep it simple.
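
As a rough illustration of the "add all jars in {{/lib}} to a classloader"
step, a hedged sketch; PluginLibLoader and the directory layout are
hypothetical, not an existing Solr API:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.ArrayList;
    import java.util.List;

    final class PluginLibLoader {
      // Builds a classloader over every jar in <pluginDir>/lib, delegating
      // to Solr's own classloader as the parent.
      static ClassLoader forPlugin(File pluginDir, ClassLoader parent)
          throws Exception {
        List<URL> urls = new ArrayList<>();
        File[] jars = new File(pluginDir, "lib")
            .listFiles((dir, name) -> name.endsWith(".jar"));
        if (jars != null) {
          for (File jar : jars) {
            urls.add(jar.toURI().toURL());
          }
        }
        return new URLClassLoader(urls.toArray(new URL[0]), parent);
      }
    }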

 Plugin Improvements
 ---

 Key: SOLR-5103
 URL: https://issues.apache.org/jira/browse/SOLR-5103
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Grant Ingersoll
 Fix For: Trunk


 I think for 5.0, we should make it easier to add plugins by defining a plugin 
 package, a la a Hadoop job jar, which is a self-contained archive of a plugin 
 that can be easily installed (even from the UI!) and configured 
 programmatically.






[jira] [Commented] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-20 Thread Rakesh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704501#comment-14704501
 ] 

Rakesh commented on SOLR-7451:
--

If I create a collection with 1 shard and 1 replica, I am getting the same error 
at random intervals. Once I encounter this error, nothing works until I restart 
my ZooKeeper and Solr servers.

 ”Not enough nodes to handle the request“ when inserting data to solrcloud
 -

 Key: SOLR-7451
 URL: https://issues.apache.org/jira/browse/SOLR-7451
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 5.1
Reporter: laigood

 I use Solr 5.1.0 with one SolrCloud node and create a collection with 1 shard 
 and 2 replicas. When I use SolrJ to insert data, it throws "Not enough nodes 
 to handle the request". But if I create the collection with 1 shard and 1 
 replica, it can insert successfully; and if I then create another replica 
 with the admin API, it still works fine and no longer throws that exception.
 The full exception stack:
 Exception in thread "main" org.apache.solr.client.solrj.SolrServerException: 
 org.apache.solr.common.SolrException: Not enough nodes to handle the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:929)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:922)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
   at 
 org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
   at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
 Caused by: org.apache.solr.common.SolrException: Not enough nodes to handle 
 the request
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1052)
   at 
 org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
   ... 10 more






[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13929 - Failure!

2015-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13929/
Java: 32bit/jdk1.9.0-ea-b60 -server -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([1C5D20A41A458EB6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236)
at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9816 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.HttpPartitionTest_1C5D20A41A458EB6-001/init-core-data-001
   [junit4]   2> 418769 INFO  
(SUITE-HttpPartitionTest-seed#[1C5D20A41A458EB6]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: 
/ed_yng/
   [junit4]   2> 418770 INFO  
(TEST-HttpPartitionTest.test-seed#[1C5D20A41A458EB6]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 418770 INFO  (Thread-1390) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 418770 INFO  (Thread-1390) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 418870 INFO  
(TEST-HttpPartitionTest.test-seed#[1C5D20A41A458EB6]) [] 
o.a.s.c.ZkTestServer start zk server on port:57439
   [junit4]   2> 418871 INFO  
(TEST-HttpPartitionTest.test-seed#[1C5D20A41A458EB6]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 418871 INFO  
(TEST-HttpPartitionTest.test-seed#[1C5D20A41A458EB6]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 418873 INFO  (zkCallback-1214-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@cf8e7d name:ZooKeeperConnection 
Watcher:127.0.0.1:57439 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2> 418873 INFO  
(TEST-HttpPartitionTest.test-seed#[1C5D20A41A458EB6]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 418873 INFO  
(TEST-HttpPartitionTest.test-seed#[1C5D20A41A458EB6]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 418873 INFO  
(TEST-HttpPartitionTest.test-seed#[1C5D20A41A458EB6]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 418874 INFO  

Re: VOTE: RC0 Release of apache-solr-ref-guide-5.3.pdf

2015-08-20 Thread Mikhail Khludnev
I dropped both overcomplicated things. Hope it helps.

On Thu, Aug 20, 2015 at 8:35 AM, Mikhail Khludnev 
mkhlud...@griddynamics.com wrote:

 Cassandra,
 page 266, Join Query Parser / Scoring, has a broken JIRA macro; I'm going to
 replace it with a url.
 page 198 has links, but they are not local (they don't refer to a page in the
 guide); they are urls, e.g. Nested Child Documents for searching with Block
 Join Query Parsers. Here I'm not sure how to do that.

 On Wed, Aug 19, 2015 at 7:23 PM, Cassandra Targett casstarg...@gmail.com
 wrote:

 Please VOTE to release the following as apache-solr-ref-guide-5.3.pdf.


 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.3-RC0/

 $ cat apache-solr-ref-guide-5.3-RC0/apache-solr-ref-guide-5.3.pdf.sha1

 076fa1cb986a8bc8ac873e65e6ef77a841336221  apache-solr-ref-guide-5.3.pdf


 Thanks,

 Cassandra




 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 mkhlud...@griddynamics.com




-- 
Sincerely yours
Mikhail Khludnev
Principal Engineer,
Grid Dynamics

http://www.griddynamics.com
mkhlud...@griddynamics.com


[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704595#comment-14704595
 ] 

Shalin Shekhar Mangar commented on SOLR-6760:
-

Are you looking at the success count of the amILeader op? Look at the state 
operations. Those are all 20k.




[jira] [Commented] (SOLR-7451) ”Not enough nodes to handle the request“ when inserting data to solrcloud

2015-08-20 Thread Guido (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704548#comment-14704548
 ] 

Guido commented on SOLR-7451:
-

Hi Erick,

Unfortunately I am extremely busy these days, but as soon as possible I will 
give the custom jar a try and let you know.

Kind Regards,

Guido




[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704610#comment-14704610
 ] 

Ramkumar Aiyengar commented on SOLR-6760:
-

Ah, got it, my bad..




[jira] [Commented] (LUCENE-4212) Tests should not use new Random() without args

2015-08-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704612#comment-14704612
 ] 

Mikhail Khludnev commented on LUCENE-4212:
--

[~thetaphi] I wonder why Math.random() isn't banned by forbidden-apis? Does 
RandomizedRunner propagate the seed there and prevent calling new Random()?




[jira] [Updated] (LUCENE-6747) FingerprintFilter - a TokenFilter for clustering/linking purposes

2015-08-20 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6747:
-
Attachment: fingerprintv2.patch

Thanks for taking a look, Adrien.
Added a v2 patch with the following changes:

1) added a call to input.end() to get the final offset state
2) the final state is retained using captureState()
3) added a FingerprintFilterFactory class

As for the alternative hashing idea:
For speed reasons this would be nice, but it reduces the readability of 
results if you want to debug any collisions or otherwise display connections.

For compactness reasons (storing in doc values etc.) it would always be possible 
to chain a conventional hashing algo in a TokenFilter onto the end of this 
text-normalizing filter. (Do we already have a conventional hashing 
TokenFilter?)




 FingerprintFilter - a TokenFilter for clustering/linking purposes
 -

 Key: LUCENE-6747
 URL: https://issues.apache.org/jira/browse/LUCENE-6747
 Project: Lucene - Core
  Issue Type: New Feature
  Components: modules/analysis
Reporter: Mark Harwood
Priority: Minor
 Attachments: fingerprintv1.patch, fingerprintv2.patch


 A TokenFilter that emits a single token which is a sorted, de-duplicated set 
 of the input tokens.
 This approach to normalizing text is used in tools like OpenRefine[1] and 
 elsewhere [2] to help in clustering or linking texts.
 The implementation proposed here has an upper limit on the size of the 
 combined token which is output.
 [1] https://github.com/OpenRefine/OpenRefine/wiki/Clustering-In-Depth
 [2] 
 https://rajmak.wordpress.com/2013/04/27/clustering-text-map-reduce-in-python/
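
An illustrative sketch of the core idea (not the attached patch; the
offset/state handling via input.end() and captureState() mentioned above is
omitted for brevity): buffer all tokens, then emit one token that is their
sorted, de-duplicated concatenation.

    import java.io.IOException;
    import java.util.Iterator;
    import java.util.TreeSet;
    import org.apache.lucene.analysis.TokenFilter;
    import org.apache.lucene.analysis.TokenStream;
    import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

    final class FingerprintSketchFilter extends TokenFilter {
      private final CharTermAttribute termAtt =
          addAttribute(CharTermAttribute.class);
      private boolean emitted = false;

      FingerprintSketchFilter(TokenStream in) {
        super(in);
      }

      @Override
      public boolean incrementToken() throws IOException {
        if (emitted) {
          return false;
        }
        emitted = true;
        TreeSet<String> terms = new TreeSet<>(); // sorted + de-duplicated
        while (input.incrementToken()) {
          terms.add(termAtt.toString());
        }
        if (terms.isEmpty()) {
          return false;
        }
        StringBuilder sb = new StringBuilder();
        for (Iterator<String> it = terms.iterator(); it.hasNext();) {
          sb.append(it.next());
          if (it.hasNext()) {
            sb.append(' ');
          }
        }
        clearAttributes();
        termAtt.setEmpty().append(sb);
        return true;
      }

      @Override
      public void reset() throws IOException {
        super.reset();
        emitted = false;
      }
    }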






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_51) - Build # 5179 - Still Failing!

2015-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5179/
Java: 64bit/jdk1.8.0_51 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([1720C2ED9F6C9F53]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=6932, name=searcherExecutor-2732-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=6932, name=searcherExecutor-2732-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([1720C2ED9F6C9F53]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=6932, 

[JENKINS] Lucene-Solr-Tests-5.3-Java7 - Build # 26 - Failure

2015-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.3-Java7/26/

No tests ran.

Build Log:
[...truncated 49637 lines...]



[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-20 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705966#comment-14705966
 ] 

Erick Erickson commented on SOLR-7836:
--

FWIW, 96 runs each (look, 96 divides by 6 processors evenly, OK?) and both 
StressTestReorder and TestReloadDeadlock seem happy, along with precommit. 
Running full test suite now.

 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-reorg.patch, SOLR-7836-synch.patch, 
 SOLR-7836.patch, SOLR-7836.patch, SOLR-7836.patch, SOLR-7836.patch, 
 deadlock_3.res.zip, deadlock_5_pass_iw.res.zip, deadlock_test


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSolrCoreState.
 Looking for comments and/or why I'm completely missing the boat.






[jira] [Updated] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6699:

Attachment: LUCENE-6699.patch

It did correct the problem.  Attached a new patch.


 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
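
On the "stuff all 3 into a single long" question, a hedged back-of-envelope
sketch: 21 bits per dimension (3 * 21 = 63 bits), assuming coordinates
normalized to [-1,1]. This is illustrative only, not the attached
Geo3DPacking.java:

    final class PackXYZ {
      private static final int BITS = 21;
      private static final long MASK = (1L << BITS) - 1;

      // Maps v in [-1,1] onto [0, MASK]; precision loss is ~2/2^21 per axis.
      private static long quantize(double v) {
        return (long) ((v + 1.0) / 2.0 * MASK) & MASK;
      }

      private static double dequantize(long q) {
        return (q & MASK) / (double) MASK * 2.0 - 1.0;
      }

      static long pack(double x, double y, double z) {
        return (quantize(x) << (2 * BITS)) | (quantize(y) << BITS) | quantize(z);
      }

      static double unpackX(long p) { return dequantize(p >>> (2 * BITS)); }
      static double unpackY(long p) { return dequantize(p >>> BITS); }
      static double unpackZ(long p) { return dequantize(p); }
    }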






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 770 - Still Failing

2015-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/770/

1 tests failed.
REGRESSION:  
org.apache.lucene.search.similarities.TestSimilarity2.testNoFieldSkew

Error Message:
expected:<0.21697770059108734> but was:<0.0>

Stack Trace:
java.lang.AssertionError: expected:<0.21697770059108734> but was:<0.0>
at 
__randomizedtesting.SeedInfo.seed([216F1F3C38561BEF:89FA6BFD1F67BF87]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.lucene.search.similarities.TestSimilarity2.testNoFieldSkew(TestSimilarity2.java:198)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1143 lines...]
   [junit4] Suite: org.apache.lucene.search.similarities.TestSimilarity2
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSimilarity2 
-Dtests.method=testNoFieldSkew -Dtests.seed=216F1F3C38561BEF 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=es_ES 

[jira] [Commented] (LUCENE-4212) Tests should not use new Random() without args

2015-08-20 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704687#comment-14704687
 ] 

Dawid Weiss commented on LUCENE-4212:
-

bq. Also randomisation framework should print/log randomisation seed

It surely does. And had it long before JDK. 




[jira] [Updated] (SOLR-7775) support SolrCloud collection as fromIndex param in query-time join

2015-08-20 Thread Andrei Beliakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Beliakov updated SOLR-7775:
--
Attachment: SOLR-7775.patch

Here is the patch. The code which obtains the core name was extracted into 
ScoreJoinQParserPlugin. It might not be the best approach; I'm open to your 
suggestions. Test coverage is provided in DistibJoinFromCollectionTest.

 support SolrCloud collection as fromIndex param in query-time join
 --

 Key: SOLR-7775
 URL: https://issues.apache.org/jira/browse/SOLR-7775
 Project: Solr
  Issue Type: Sub-task
  Components: query parsers
Reporter: Mikhail Khludnev
Assignee: Mikhail Khludnev
 Fix For: 5.3

 Attachments: SOLR-7775.patch


 it's allusion to SOLR-4905, will be addressed right after SOLR-6234



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7775) support SolrCloud collection as fromIndex param in query-time join

2015-08-20 Thread Andrei Beliakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Beliakov updated SOLR-7775:
--
Attachment: (was: SOLR-7775.patch)

 support SolrCloud collection as fromIndex param in query-time join
 --

 Key: SOLR-7775
 URL: https://issues.apache.org/jira/browse/SOLR-7775
 Project: Solr
  Issue Type: Sub-task
  Components: query parsers
Reporter: Mikhail Khludnev
Assignee: Mikhail Khludnev
 Fix For: 5.3


 it's allusion to SOLR-4905, will be addressed right after SOLR-6234



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7949) There is an XSS issue in the plugins/stats page of Admin Web UI.

2015-08-20 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-7949:
--
Fix Version/s: 5.3.1
   5.4
   Trunk

 There is an XSS issue in the plugins/stats page of Admin Web UI.
 ---

 Key: SOLR-7949
 URL: https://issues.apache.org/jira/browse/SOLR-7949
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.9, 4.10.4, 5.2.1
Reporter: davidchiu
 Fix For: Trunk, 5.4, 5.3.1


 Open the Solr Admin Web UI, select a core (such as collection1), click 
 Plugins/stats, and type a URL like 
 http://127.0.0.1:8983/solr/#/collection1/plugins/cache?entry=score=<img 
 src=1 onerror=alert(1);> into the browser address bar; you will get an alert 
 box with 1.
 I changed the following code to resolve this problem:
 The original code:
   for( var i = 0; i < entry_count; i++ )
   {
     $( 'a[data-bean="' + entries[i] + '"]', frame_element )
       .parent().addClass( 'expanded' );
   }
 The changed code:
   for( var i = 0; i < entry_count; i++ )
   {
     $( 'a[data-bean="' + entries[i].esc() + '"]', frame_element )
       .parent().addClass( 'expanded' );
   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7021) Leader will not publish core as active without recovering first, but never recovers

2015-08-20 Thread Adrian Fitzpatrick (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704663#comment-14704663
 ] 

Adrian Fitzpatrick commented on SOLR-7021:
--

Also have seen this issue on Solr 4.10.3, on a 3 node cluster. The issue affected 
only one of 3 collections, with each of the 3 collections configured with 5 
shards and 3 replicas. In the affected collection, for each of the 5 shards, the 
leader was on the same node (hadoopnode02) and was showing as down for all 
shards. Other replicas for each shard were reporting that they were waiting for 
the leader (e.g. "I was asked to wait on state recovering for shard3 in 
the_collection_20150818161800 on hadoopnode01:8983_solr but I still do not see 
the requested state. I see state: recovering live:true leader from ZK: 
http://hadoopnode02:8983/solr/the_collection_20150818161800_shard3_replica2")

Something like the work-around suggested by Andrey worked - we shut down the 
whole cluster, then brought back up all nodes except the one which was reporting 
leader errors (hadoopnode02). This seemed to trigger a leader election, but 
without a quorum it did not complete. We then brought up hadoopnode02 - the 
election completed successfully and the cluster state returned to normal.




 Leader will not publish core as active without recovering first, but never 
 recovers
 ---

 Key: SOLR-7021
 URL: https://issues.apache.org/jira/browse/SOLR-7021
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10
Reporter: James Hardwick
Priority: Critical
  Labels: recovery, solrcloud, zookeeper

 A little background: 1 core solr-cloud cluster across 3 nodes, each with its 
 own shard and each shard with a single replica hence each replica is itself a 
 leader. 
 For reasons we won't get into, we witnessed a shard go down in our cluster. 
 We restarted the cluster but our core/shards still did not come back up. 
 After inspecting the logs, we found this:
 {code}
 015-01-21 15:51:56,494 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
 - We are http://xxx.xxx.xxx.35:8081/solr/xyzcore/ and leader is 
 http://xxx.xxx.xxx.35:8081/solr/xyzcore/
 2015-01-21 15:51:56,496 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
 - No LogReplay needed for core=xyzcore baseURL=http://xxx.xxx.xxx.35:8081/solr
 2015-01-21 15:51:56,496 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
 - I am the leader, no recovery necessary
 2015-01-21 15:51:56,496 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
 - publishing core=xyzcore state=active collection=xyzcore
 2015-01-21 15:51:56,497 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
 - numShards not found on descriptor - reading it from system property
 2015-01-21 15:51:56,498 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
 - publishing core=xyzcore state=down collection=xyzcore
 2015-01-21 15:51:56,498 [coreZkRegister-1-thread-2] INFO  cloud.ZkController  
 - numShards not found on descriptor - reading it from system property
 2015-01-21 15:51:56,501 [coreZkRegister-1-thread-2] ERROR core.ZkContainer  - 
 :org.apache.solr.common.SolrException: Cannot publish state of core 'xyzcore' 
 as active without recovering first!
   at org.apache.solr.cloud.ZkController.publish(ZkController.java:1075)
 {code}
 And at this point the necessary shards never recover correctly and hence our 
 core never returns to a functional state. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7775) support SolrCloud collection as fromIndex param in query-time join

2015-08-20 Thread Andrei Beliakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Beliakov updated SOLR-7775:
--
Attachment: SOLR-7775.patch

 support SolrCloud collection as fromIndex param in query-time join
 --

 Key: SOLR-7775
 URL: https://issues.apache.org/jira/browse/SOLR-7775
 Project: Solr
  Issue Type: Sub-task
  Components: query parsers
Reporter: Mikhail Khludnev
Assignee: Mikhail Khludnev
 Fix For: 5.3

 Attachments: SOLR-7775.patch


 it's allusion to SOLR-4905, will be addressed right after SOLR-6234



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4212) Tests should not use new Random() without args

2015-08-20 Thread Lev Priima (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704685#comment-14704685
 ] 

Lev Priima commented on LUCENE-4212:


Also, the randomisation framework should print/log the randomisation seed and be 
able to initialise the random seed from a system property (or file, etc.) to 
simplify reproduction of test failures.

As it's made here:
http://hg.openjdk.java.net/jdk9/dev/hotspot/file/6f56da5908e6/test/testlibrary/jdk/test/lib/Utils.java#l357
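
For illustration, a minimal sketch of the idea (this is not the randomizedtesting 
framework's actual implementation; the property name tests.seed is borrowed from 
the reproduce lines elsewhere in this digest, and the decimal seed format is an 
assumption of the sketch):

{code}
import java.util.Random;

public class SeededRandomExample {
  public static void main(String[] args) {
    // Reuse a seed passed via -Dtests.seed=..., otherwise pick a fresh one.
    String prop = System.getProperty("tests.seed");
    long seed = (prop != null) ? Long.parseLong(prop) : System.nanoTime();
    // Always log the seed so a failure can be reproduced later.
    System.out.println("random seed: " + seed
        + " (reproduce with -Dtests.seed=" + seed + ")");
    Random random = new Random(seed);
    // All randomized test data now derives from a reproducible source.
    System.out.println("first draw: " + random.nextInt(100));
  }
}
{code}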

 Tests should not use new Random() without args
 --

 Key: LUCENE-4212
 URL: https://issues.apache.org/jira/browse/LUCENE-4212
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Robert Muir
 Fix For: 4.0-ALPHA, Trunk

 Attachments: LUCENE-4212.patch, LUCENE-4212.patch, LUCENE-4212.patch, 
 LUCENE-4212.patch


 They should be using random() etc, and if they create one, it should pass in 
 a seed.
 Otherwise, they probably won't reproduce.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7949) There is an XSS issue in the plugins/stats page of Admin Web UI.

2015-08-20 Thread davidchiu (JIRA)
davidchiu created SOLR-7949:
---

 Summary: There is an XSS issue in the plugins/stats page of Admin Web 
UI.
 Key: SOLR-7949
 URL: https://issues.apache.org/jira/browse/SOLR-7949
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 5.2.1, 4.10.4, 4.9
Reporter: davidchiu


Open the Solr Admin Web UI, select a core (such as collection1), click 
Plugins/stats, and type a URL like 
http://127.0.0.1:8983/solr/#/collection1/plugins/cache?entry=score=<img src=1 
onerror=alert(1);> into the browser address bar; you will get an alert box with 1.

I changed the following code to resolve this problem:
The original code:
  for( var i = 0; i < entry_count; i++ )
  {
    $( 'a[data-bean="' + entries[i] + '"]', frame_element )
      .parent().addClass( 'expanded' );
  }

The changed code:
  for( var i = 0; i < entry_count; i++ )
  {
    $( 'a[data-bean="' + entries[i].esc() + '"]', frame_element )
      .parent().addClass( 'expanded' );
  }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-08-20 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated SOLR-7888:
-
Attachment: (was: SOLR-7888.patch)

 Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a 
 BooleanQuery filter parameter available in Solr
 --

 Key: SOLR-7888
 URL: https://issues.apache.org/jira/browse/SOLR-7888
 Project: Solr
  Issue Type: New Feature
  Components: Suggester
Affects Versions: 5.2.1
Reporter: Arcadius Ahouansou
Assignee: Jan Høydahl
 Fix For: 5.4

 Attachments: SOLR-7888.patch, SOLR-7888.patch, SOLR-7888.patch


  LUCENE-6464 has introduced a very flexible lookup method that takes as 
 parameter a BooleanQuery that is used for filtering results.
 This ticket is to expose that method to Solr.
 This would allow user to do:
 {code}
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:tennis
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:golf AND contexts:football
 {code}
 etc
 Given that the context filtering is currently only implemented by the 
 {code}AnalyzingInfixSuggester{code} and by the 
 {code}BlendedInfixSuggester{code}, this initial implementation will support 
 only these 2 lookup implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-08-20 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated SOLR-7888:
-
Attachment: SOLR-7888.patch


Hello [~janhoy]

Thank you very much for your comments.

Have uploaded new version of the patch.

{quote}Perhaps this property should be moved to some other Lucene class as a 
common global name for context field for all analyzers that supports context 
filtering,...{quote}
I agree and I moved {{CONTEXTS_FIELD_NAME}} into Lucene's {{Lookup.java}}, 
meaning that it is now available to all Lookup implementations.

{quote}Regarding a request including suggesters that do not support filtering, 
I think it depends on its data whether the correct thing is to return an 
unfiltered response (open data) or to return nothing (sensitive data). Of 
course, the application has the power to pass suggest.dictionary accordingly if 
it knows that filtering is done. Alternatively, some 
suggest.returnUnFilteredSuggestionsIfFilteringIsNotSupported param to control 
it, I don't know...{quote}
Not quite convinced about this:
Let's take the current solr 5.2.1: 
passing {{suggest.q=term&suggest.contextFilterQuery=ctx1}} will just return all 
suggestions matching the {{term}}, ignoring {{ctx1}}, as context filtering is not 
yet implemented.

I believe that keeping that behaviour for Lucene Suggesters that have not yet 
implemented context makes more sense to me.

In case a user needs context filtering to happen on a Lucene suggester not yet 
supporting filtering, they just need to implement it.

Ideally and eventually, we will have context support for all Lucene suggesters, 
so I am not quite sure whether  
{{suggest.returnUnFilteredSuggestionsIfFilteringIsNotSupported}} is the way we 
should go. 

{quote}
I think that if CONTEXT_ANALYZER_FIELD_TYPE is explicitly given and wrong, we 
should fail-fast and throw exception instead of falling back to 
DocumentDictionaryFactory.CONTEXT_FIELD{quote}
I had thought a bit more about this.
I believe that we do not really need the {{CONTEXT_ANALYZER_FIELD_TYPE}} 
config. 
One just needs to configure the context field in {{schema.xml}} with the needed 
query and index analyzers and all should work.
In case one needs different context analyzers for different suggesters, we just 
need to configure different context fields in {{schema.xml}}.
This has several advantages:
- Simpler/less configuration.
- Cleaner/more readable/less code to maintain.

In case I am missing any use-case, please let me know

{quote}
Will let others chime in on the param names too. Which one do you like the best?
suggest.contextFilterQuery
suggest.contextQ
suggest.fq
suggest.context.fq
{quote}
The param name in this latest patch is still {{suggest.contextFilterQuery}}, as 
we have not yet agreed on the right name to adopt.
Maybe [~rcmuir] or [~shalinmangar] or [~varunthacker] could help here
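
As a usage illustration, a minimal SolrJ sketch of issuing such a filtered 
suggest request (hypothetical endpoint, collection and context value; it assumes 
the {{suggest.contextFilterQuery}} parameter from this patch):

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SuggestContextFilterExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical core/collection URL.
    HttpSolrClient client =
        new HttpSolrClient("http://localhost:8983/solr/techproducts");
    SolrQuery query = new SolrQuery();
    query.setRequestHandler("/suggest");
    query.set("suggest", true);
    query.set("suggest.q", "term");
    // The context filter parameter under discussion in this issue.
    query.set("suggest.contextFilterQuery", "contexts:tennis");
    QueryResponse rsp = client.query(query);
    // Suggestions grouped by suggester (dictionary) name.
    System.out.println(rsp.getSuggesterResponse().getSuggestions());
    client.close();
  }
}
{code}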

 Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a 
 BooleanQuery filter parameter available in Solr
 --

 Key: SOLR-7888
 URL: https://issues.apache.org/jira/browse/SOLR-7888
 Project: Solr
  Issue Type: New Feature
  Components: Suggester
Affects Versions: 5.2.1
Reporter: Arcadius Ahouansou
Assignee: Jan Høydahl
 Fix For: 5.4

 Attachments: SOLR-7888.patch, SOLR-7888.patch, SOLR-7888.patch


  LUCENE-6464 has introduced a very flexible lookup method that takes as 
 parameter a BooleanQuery that is used for filtering results.
 This ticket is to expose that method to Solr.
 This would allow user to do:
 {code}
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:tennis
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:golf AND contexts:football
 {code}
 etc
 Given that the context filtering is currently only implemented by the 
 {code}AnalyzingInfixSuggester{code} and by the 
 {code}BlendedInfixSuggester{code}, this initial implementation will support 
 only these 2 lookup implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705956#comment-14705956
 ] 

Karl Wright commented on LUCENE-6699:
-

It looks like a precision error again.  The circle is quite small (radius about 
1e-6), and the computed bounds lie *just* inside the circle's provided edge 
point (distance: 2e-11).  I'll try increasing MINIMUM_RESOLUTION just a bit 
more to see if that corrects the problem.
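
To make the failure mode concrete, a tiny sketch of the kind of tolerance-based 
check involved (the constants here are illustrative only, not spatial3d's actual 
MINIMUM_RESOLUTION):

{code}
public class ToleranceSketch {
  // Treat a point as inside the bounds if it lies no further outside
  // the boundary than the given tolerance.
  static boolean withinBounds(double distanceOutside, double tolerance) {
    return distanceOutside <= tolerance;
  }

  public static void main(String[] args) {
    double overshoot = 2e-11; // the edge point's distance from the report above
    System.out.println(withinBounds(overshoot, 1e-12)); // false: flagged as outside
    System.out.println(withinBounds(overshoot, 1e-10)); // true: larger tolerance absorbs it
  }
}
{code}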

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
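
As an aside on the packing question above, a minimal sketch of quantizing three 
unit-sphere coordinates to 21 bits each and packing them into one long (an 
illustrative bit layout, not Lucene's actual encoding):

{code}
public class PackXYZSketch {
  static final int BITS = 21;               // 3 * 21 = 63 bits fits in a long
  static final long MASK = (1L << BITS) - 1;

  // Quantize v in [-1, 1] to an unsigned 21-bit integer.
  static long encode(double v) {
    return Math.round((v + 1.0) / 2.0 * MASK);
  }

  // Invert the quantization; precision loss is roughly 1e-6 per coordinate.
  static double decode(long bits) {
    return (bits / (double) MASK) * 2.0 - 1.0;
  }

  static long pack(double x, double y, double z) {
    return (encode(x) << (2 * BITS)) | (encode(y) << BITS) | encode(z);
  }

  public static void main(String[] args) {
    long packed = pack(0.123456789, -0.5, 0.999);
    double x = decode((packed >>> (2 * BITS)) & MASK);
    System.out.println("x after round trip: " + x);
  }
}
{code}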



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6689) Odd analysis problem with WDF, appears to be triggered by preceding analysis components

2015-08-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705977#comment-14705977
 ] 

Shawn Heisey commented on LUCENE-6689:
--

I have just found a better workaround: the luceneMatchVersion can be specified 
on each analysis component, so I can apply it *only* to the 
WordDelimiterFilterFactory in the index analyzer.

I hope this problem will still be fixed.


 Odd analysis problem with WDF, appears to be triggered by preceding analysis 
 components
 ---

 Key: LUCENE-6689
 URL: https://issues.apache.org/jira/browse/LUCENE-6689
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Shawn Heisey

 This problem shows up for me in Solr, but I believe the issue is down at the 
 Lucene level, so I've opened the issue in the LUCENE project.  We can move it 
 if necessary.
 I've boiled the problem down to this minimum Solr fieldType:
 {noformat}
 <fieldType name="testType" class="solr.TextField"
            sortMissingLast="true" positionIncrementGap="100">
   <analyzer type="index">
     <tokenizer class="org.apache.lucene.analysis.icu.segmentation.ICUTokenizerFactory"
                rulefiles="Latn:Latin-break-only-on-whitespace.rbbi"/>
     <filter class="solr.PatternReplaceFilterFactory"
             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$"
             replacement="$2"
     />
     <filter class="solr.WordDelimiterFilterFactory"
             splitOnCaseChange="1"
             splitOnNumerics="1"
             stemEnglishPossessive="1"
             generateWordParts="1"
             generateNumberParts="1"
             catenateWords="1"
             catenateNumbers="1"
             catenateAll="0"
             preserveOriginal="1"
     />
   </analyzer>
   <analyzer type="query">
     <tokenizer class="org.apache.lucene.analysis.icu.segmentation.ICUTokenizerFactory"
                rulefiles="Latn:Latin-break-only-on-whitespace.rbbi"/>
     <filter class="solr.PatternReplaceFilterFactory"
             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$"
             replacement="$2"
     />
     <filter class="solr.WordDelimiterFilterFactory"
             splitOnCaseChange="1"
             splitOnNumerics="1"
             stemEnglishPossessive="1"
             generateWordParts="1"
             generateNumberParts="1"
             catenateWords="0"
             catenateNumbers="0"
             catenateAll="0"
             preserveOriginal="0"
     />
   </analyzer>
 </fieldType>
 {noformat}
 On Solr 4.7, if this type is given the input "aaa-bbb: ccc" then index 
 analysis puts "aaa" at term position 1 and "bbb" at term position 2.  This seems 
 perfectly reasonable to me.  In Solr 4.9, both terms end up at position 2.  
 This causes phrase queries which used to work to return zero hits.  The exact 
 text of the phrase query is in the original documents that match on 4.7.
 If the custom rbbi (which is included unmodified from the lucene icu analysis 
 source code) is not used, then the problem doesn't happen, because the 
 punctuation doesn't make it to the PRF.  If the PatternReplaceFilterFactory 
 is not present, then the problem doesn't happen.
 I can work around the problem by setting luceneMatchVersion to 4.7, but I 
 think the behavior is a bug, and I would rather not continue to use 4.7 
 analysis when I upgrade to 5.x, which I hope to do soon.
 Whether luceneMatchVersion is LUCENE_47 or LUCENE_4_9, query analysis puts 
 "aaa" at term position 1 and "bbb" at term position 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705978#comment-14705978
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1696880 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1696880 ]

LUCENE-6699: bump up MINIMUM_RESOLUTION some more

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-08-20 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705925#comment-14705925
 ] 

Arcadius Ahouansou edited comment on SOLR-7888 at 8/20/15 10:53 PM:


Hello [~janhoy]

Thank you very much for your comments.

Have uploaded new version of the patch.

{quote}Perhaps this property should be moved to some other Lucene class as a 
common global name for context field for all analyzers that supports context 
filtering,...{quote}
I agree and I moved {{CONTEXTS_FIELD_NAME}} into Lucene's {{Lookup.java}}, 
meaning that it is now available to all Lookup implementations.

{quote}Regarding a request including suggesters that do not support filtering, 
I think it depends on its data whether the correct thing is to return an 
unfiltered response (open data) or to return nothing (sensitive data). Of 
course, the application has the power to pass suggest.dictionary accordingly if 
it knows that filtering is done. Alternatively, some 
suggest.returnUnFilteredSuggestionsIfFilteringIsNotSupported param to control 
it, I don't know...{quote}
Not quite sure about this:
Let's take the current solr 5.2.1: 
passing {{suggest.q=term&suggest.contextFilterQuery=ctx1}} will return all 
suggestions matching the {{term}}, ignoring {{ctx1}}, as context filtering is not 
yet implemented.

I believe that keeping that behaviour for Lucene Suggesters that have not yet 
implemented context makes more sense to me.

In case a user needs context filtering to happen on a Lucene suggester not yet 
supporting filtering, they just need to implement it.

Ideally and eventually, we will have context support for all Lucene suggesters, 
so I am not quite sure whether  
{{suggest.returnUnFilteredSuggestionsIfFilteringIsNotSupported}} is the way we 
should go. 

{quote}
I think that if CONTEXT_ANALYZER_FIELD_TYPE is explicitly given and wrong, we 
should fail-fast and throw exception instead of falling back to 
DocumentDictionaryFactory.CONTEXT_FIELD{quote}
I had thought a bit more about this.
I believe that we do not really need the {{CONTEXT_ANALYZER_FIELD_TYPE}} 
config. 
One just needs to configure the context field in {{schema.xml}} with the needed 
query and index analyzers and all should work.
In case one needs different context analyzers for different suggesters, we just 
need to configure different context fields in {{schema.xml}}.
This has several advantages:
- Simpler/less configuration.
- Cleaner/more readable/less code to maintain.

In case I am missing any use-case, please let me know

{quote}
Will let others chime in on the param names too. Which one do you like the best?
suggest.contextFilterQuery
suggest.contextQ
suggest.fq
suggest.context.fq
{quote}
The param name in this latest patch is still {{suggest.contextFilterQuery}}, as 
we have not yet agreed on the right name to adopt.
Maybe [~rcmuir] or [~shalinmangar] or [~varunthacker] could help here


was (Author: arcadius):

Hello [~janhoy]

Thank you very much for your comments.

Have uploaded new version of the patch.

{quote}Perhaps this property should be moved to some other Lucene class as a 
common global name for context field for all analyzers that supports context 
filtering,...{quote}
I agree and I moved {{CONTEXTS_FIELD_NAME}} into Lucene's {{Lookup.java}}, 
meaning that it is now available to all Lookup implementations.

{quote}Regarding a request including suggesters that do not support filtering, 
I think it depends on its data whether the correct thing is to return an 
unfiltered response (open data) or to return nothing (sensitive data). Of 
course, the application has the power to pass suggest.dictionary accordingly if 
it knows that filtering is done. Alternatively, some 
suggest.returnUnFilteredSuggestionsIfFilteringIsNotSupported param to control 
it, I don't know...{quote}
Not quite convinced about this:
Let's take the current solr 5.2.1: 
passing {{suggest.q=term&suggest.contextFilterQuery=ctx1}} will just return all 
suggestions matching the {{term}}, ignoring {{ctx1}}, as context filtering is not 
yet implemented.

I believe that keeping that behaviour for Lucene Suggesters that have not yet 
implemented context makes more sense to me.

In case a user needs context filtering to happen on a Lucene suggester not yet 
supporting filtering, they just need to implement it.

Ideally and eventually, we will have context support for all Lucene suggesters, 
so I am not quite sure whether  
{{suggest.returnUnFilteredSuggestionsIfFilteringIsNotSupported}} is the way we 
should go. 

{quote}
I think that if CONTEXT_ANALYZER_FIELD_TYPE is explicitly given and wrong, we 
should fail-fast and throw exception instead of falling back to 
DocumentDictionaryFactory.CONTEXT_FIELD{quote}
I had thought a bit more about this.
I believe that we do not really need the {{CONTEXT_ANALYZER_FIELD_TYPE}} 
config. 
One just needs to configure the 

[jira] [Updated] (SOLR-7888) Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a BooleanQuery filter parameter available in Solr

2015-08-20 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated SOLR-7888:
-
Attachment: SOLR-7888.patch

 Make Lucene's AnalyzingInfixSuggester.lookup() method that takes a 
 BooleanQuery filter parameter available in Solr
 --

 Key: SOLR-7888
 URL: https://issues.apache.org/jira/browse/SOLR-7888
 Project: Solr
  Issue Type: New Feature
  Components: Suggester
Affects Versions: 5.2.1
Reporter: Arcadius Ahouansou
Assignee: Jan Høydahl
 Fix For: 5.4

 Attachments: SOLR-7888.patch, SOLR-7888.patch, SOLR-7888.patch, 
 SOLR-7888.patch


  LUCENE-6464 has introduced a very flexible lookup method that takes as 
 parameter a BooleanQuery that is used for filtering results.
 This ticket is to expose that method to Solr.
 This would allow user to do:
 {code}
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:tennis
 /suggest?suggest=true&suggest.build=true&suggest.q=term&suggest.contextFilterQuery=contexts:golf AND contexts:football
 {code}
 etc
 Given that the context filtering is currently only implemented by the 
 {code}AnalyzingInfixSuggester{code} and by the 
 {code}BlendedInfixSuggester{code}, this initial implementation will support 
 only these 2 lookup implementations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_51) - Build # 5178 - Failure!

2015-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5178/
Java: 64bit/jdk1.8.0_51 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ERROR: SolrIndexSearcher opens=51 closes=50

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=51 closes=50
at __randomizedtesting.SeedInfo.seed([17D627E1E324DBB9]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233)
at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=4126, name=searcherExecutor-1849-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=4126, name=searcherExecutor-1849-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([17D627E1E324DBB9]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=4126, 

[jira] [Commented] (LUCENE-6755) more tests of ToChildBlockJoinScorer.advance

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705912#comment-14705912
 ] 

ASF subversion and git services commented on LUCENE-6755:
-

Commit 1696870 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1696870 ]

LUCENE-6755: fix test bug and increase num of docs to improve chances of random 
query matching (merge r1696867)

 more tests of ToChildBlockJoinScorer.advance
 

 Key: LUCENE-6755
 URL: https://issues.apache.org/jira/browse/LUCENE-6755
 Project: Lucene - Core
  Issue Type: Test
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: Trunk, 5.4


 I recently helped diagnose some strange errors with ToChildBlockJoinQuery in 
 an older version of Solr which lead me to realize that the problem seemed to 
 have been fixed by LUCENE-6593 -- however the tests Adrien added in that 
 issue focused specifically the interaction of ToChildBlockJoinScorer with 
 with the (fairly new) aproximations support in Scorers (evidently that was 
 trigger that caused Adrien to investigate and make the fixes).
 However, in my initial diagnoses / testing, there were at least 2 (non 
 aproximation based) situations where the _old_ code was problematic:
 * ToChildBlockJoinScorer.advance didn't satisfy the nextDoc equivilent 
 behavior contract in the special case where the first doc in a segment was a 
 parent w/o any kids
 * in indexes that used multiple levels of hierarchy, a BooleanQuery that 
 combined multiple ToChildBlockJoinQueries using different parent filters -- 
 ie: find docs that are _children_ of X and _grandchildren_ of Y
 As mentioned, Adrien's changes in LUCENE-6593 seemed to fix both of these 
 problematic situations, but I'm opening this issue to track the addition of 
 some new tests to explicitly cover these situations to protect us against 
 future regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705982#comment-14705982
 ] 

Michael McCandless commented on LUCENE-6699:


OK I committed that patch, thanks [~daddywri], but unfortunately I hit another 
failure:

{noformat}
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPointField 
-Dtests.method=testRandomMedium -Dtests.seed=71C652F660067AD3 
-Dtests.multiplier=5 -Dtests.slow=true 
-Dtests.linedocsfile=/lucenedata/hudson.enwiki.random.lines.txt.fixed 
-Dtests.locale=zh_HK -Dtests.timezone=America/Mazatlan -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   9.72s | TestGeo3DPointField.testRandomMedium 
   [junit4] Throwable #1: 
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=14, name=T0, state=RUNNABLE, 
group=TGRP-TestGeo3DPointField]
   [junit4]at 
__randomizedtesting.SeedInfo.seed([71C652F660067AD3:CC18655E216319B5]:0)
   [junit4] Caused by: java.lang.AssertionError: expected WITHIN (1) or 
OVERLAPS (2) but got 0; shape=GeoCircle: {planetmodel=PlanetModel.SPHERE, 
center=[lat=-0.004282454525970269, lon=-1.6739831367422277E-4], 
radius=1.959639723134033E-6(1.1227908550176523E-4)}; XYZSolid=XYZSolid: 
{planetmodel=PlanetModel.SPHERE, isWholeWorld=false, minXplane=[A=1.0, B=0.0, 
C=0.0, D=-0.90807894643, side=1.0], maxXplane=[A=1.0, B=0.0, C=0.0, 
D=-0.908246908629, side=-1.0], minYplane=[A=0.0, B=1.0, C=0.0, 
D=1.693563105447845E-4, side=1.0], maxYplane=[A=0.0, B=1.0, C=0.0, 
D=1.6543724525666504E-4, side=-1.0], minZplane=[A=0.0, B=0.0, C=1.0, 
D=0.004284400993353207, side=1.0], maxZplane=[A=0.0, B=0.0, C=1.0, 
D=0.004280481873941856, side=-1.0]}
   [junit4]at 
__randomizedtesting.SeedInfo.seed([71C652F660067AD3]:0)
   [junit4]at 
org.apache.lucene.bkdtree3d.PointInGeo3DShapeQuery$1.scorer(PointInGeo3DShapeQuery.java:105)
   [junit4]at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:589)
   [junit4]at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:135)
   [junit4]at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]at 
org.apache.lucene.search.AssertingWeight.bulkScorer(AssertingWeight.java:69)
   [junit4]at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:618)
   [junit4]at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
   [junit4]at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:425)
   [junit4]at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4._run(TestGeo3DPointField.java:586)
   [junit4]at 
org.apache.lucene.bkdtree3d.TestGeo3DPointField$4.run(TestGeo3DPointField.java:520)
   [junit4]   2 NOTE: test params are: 
codec=FastDecompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=26391, maxDocsPerChunk=1, blockSize=992), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=FAST_DECOMPRESSION,
 chunkSize=26391, blockSize=992)), 
sim=RandomSimilarityProvider(queryNorm=false,coord=yes): {}, locale=zh_HK, 
timezone=America/Mazatlan
   [junit4]   2 NOTE: Linux 3.13.0-46-generic amd64/Oracle Corporation 
1.8.0_40 (64-bit)/cpus=8,threads=1,free=20104,total=449314816
   [junit4]   2 NOTE: All tests run in this JVM: [TestGeo3DPointField]
   [junit4] Completed [1/1] in 9.85s, 1 test, 1 error  FAILURES!
{noformat}

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion 

[jira] [Commented] (SOLR-7948) MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1

2015-08-20 Thread davidchiu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706118#comment-14706118
 ] 

davidchiu commented on SOLR-7948:
-

I dug into the problem again and found that the httpclient-4.4.1 in solr 5.2.1 
conflicted with the httpclient-4.2.5 in hadoop 2.7.1. I replaced the 
httpclient-4.2.5 in hadoop 2.7.1 (just under hadoop/common/lib) with 
httpclient-4.4.1, and it went through.

By the way, there is a bug in httpclient 4.4.1: in URLEncodedUtils.java, the 
function parse(final String s, final Charset charset) doesn't validate the 
parameter s, which will sometimes cause a NullPointerException.
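
For illustration, a minimal caller-side guard against the NullPointerException 
described above (this assumes httpclient 4.4.1's URLEncodedUtils; the null check 
is a workaround in the caller, not a fix in the library):

{code}
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.List;

import org.apache.http.NameValuePair;
import org.apache.http.client.utils.URLEncodedUtils;

public class SafeParseExample {
  // Validate the input before delegating to URLEncodedUtils.parse,
  // which does not itself check for null.
  static List<NameValuePair> safeParse(String s) {
    if (s == null || s.isEmpty()) {
      return Collections.emptyList();
    }
    return URLEncodedUtils.parse(s, StandardCharsets.UTF_8);
  }

  public static void main(String[] args) {
    System.out.println(safeParse("q=solr&rows=10")); // normal query string
    System.out.println(safeParse(null));             // safe: returns an empty list
  }
}
{code}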


 MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1
 -

 Key: SOLR-7948
 URL: https://issues.apache.org/jira/browse/SOLR-7948
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
 Environment: OS:suse 11
 JDK:java version 1.7.0_65 
 Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
 Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
 HADOOP:hadoop 2.7.1 
 SOLR:5.2.1
Reporter: davidchiu
Assignee: Mark Miller

 When I used the MapReduceIndexerTool of 5.2.1 to index files, I got the following 
 errors, but when I used 4.9.0's MapReduceIndexerTool, it did work with hadoop 2.7.1.
 The exception ERROR was as follows:
 INFO  - 2015-08-20 11:44:45.155; [   ] org.apache.solr.hadoop.HeartBeater; 
 Heart beat reporting class is 
 org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
 INFO  - 2015-08-20 11:44:45.161; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Using this unpacked directory as 
 solr home: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.162; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Creating embedded Solr server with 
 solrHomeDir: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip,
  fs: 
 DFS[DFSClient[clientName=DFSClient_attempt_1440040092614_0004_r_01_0_1678264055_1,
  ugi=root (auth:SIMPLE)]], outputShardDir: 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.194; [   ] 
 org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for 
 directory: 
 '/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/'
 INFO  - 2015-08-20 11:44:45.206; [   ] org.apache.solr.hadoop.HeartBeater; 
 HeartBeat thread running
 INFO  - 2015-08-20 11:44:45.207; [   ] org.apache.solr.hadoop.HeartBeater; 
 Issuing heart beat for 1 threads
 INFO  - 2015-08-20 11:44:45.418; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Constructed instance information 
 solr.home 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
  
 (/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip),
  instance dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/,
  conf dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/conf/,
  writing index to solr.data.dir 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1/data,
  with permdir 
 hdfs://127.0.0.10:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.426; [   ] org.apache.solr.core.SolrXmlConfig; 
 Loading container configuration from 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/solr.xml
 INFO  - 2015-08-20 11:44:45.474; [   ] 
 org.apache.solr.core.CorePropertiesLocator; Config-defined core root 
 directory: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; 
 New CoreContainer 1656436773
 INFO  - 2015-08-20 

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705919#comment-14705919
 ] 

Michael McCandless commented on LUCENE-6699:


Ugh sorry I meant to include it in the copy/paste:

{noformat}
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestGeo3DPointField 
-Dtests.method=testRandomTiny -Dtests.seed=6700D50161C38330 
-Dtests.multiplier=5 -Dtests.slow=true 
-Dtests.linedocsfile=/lucenedata/hudson.enwiki.random.lines.txt.fixed 
-Dtests.locale=lt_LT -Dtests.timezone=Asia/Calcutta -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
{noformat}

It's interesting it's testRandomTiny: it should be easier to debug!

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.3 - Build # 11 - Still Failing

2015-08-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.3/11/

4 tests failed.
REGRESSION:  org.apache.solr.cloud.hdfs.HdfsBasicDistributedZk2Test.test

Error Message:
Server refused connection at: http://127.0.0.1:42930/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Server refused connection at: 
http://127.0.0.1:42930/collection1
at 
__randomizedtesting.SeedInfo.seed([D18E239D34220062:59DA1C479ADE6D9A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:567)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:234)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:226)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:943)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:958)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.brindDownShardIndexSomeDocsAndRecover(BasicDistributedZk2Test.java:365)
at 
org.apache.solr.cloud.BasicDistributedZk2Test.test(BasicDistributedZk2Test.java:98)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:963)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:938)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)

Re: VOTE: RC1 release of apache-solr-ref-guide-5.3.pdf

2015-08-20 Thread Anshum Gupta
+1 to releasing 5.3 RC1.

On Thu, Aug 20, 2015 at 9:07 AM, Cassandra Targett casstarg...@gmail.com
wrote:

 Please VOTE to release the following as apache-solr-ref-guide-5.3.pdf


 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.3-RC1/

 $cat apache-solr-ref-guide-5.3.pdf.sha1

 1255cba4413023e30aff345d30bce33846189975  apache-solr-ref-guide-5.3.pdf


 Here's my +1.

 Thanks,

 Cassandra




-- 
Anshum Gupta


[jira] [Commented] (SOLR-7734) MapReduce Indexer can error when using collection

2015-08-20 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14706037#comment-14706037
 ] 

Gregory Chanan commented on SOLR-7734:
--

+1 lgtm.

I'll commit to trunk assuming the tests/precommit pass.  If you want it in 5.x 
as well, please create a new patch (we'd probably need to change the xml for the 
version).

 MapReduce Indexer can error when using collection
 -

 Key: SOLR-7734
 URL: https://issues.apache.org/jira/browse/SOLR-7734
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
Reporter: Mike Drob
Assignee: Gregory Chanan
 Fix For: Trunk, 5.4

 Attachments: SOLR-7734.patch, SOLR-7734.patch, SOLR-7734.patch, 
 SOLR-7734.patch, SOLR-7734.patch, SOLR-7734.patch


 When running the MapReduceIndexerTool, it will usually pull a 
 {{solrconfig.xml}} from ZK for the collection that it is running against. 
 This can be problematic for several reasons:
 * Performance: The configuration in ZK will likely have several query 
 handlers, and lots of other components that don't make sense in an 
 indexing-only use of EmbeddedSolrServer (ESS).
 * Classpath Resources: If the Solr services are using some kind of additional 
 service (such as Sentry for auth) then the indexer will not have access to 
 the necessary configurations without the user jumping through several hoops.
 * Distinct Configuration Needs: Enabling Soft Commits on the ESS doesn't make 
 sense. There are other configurations that 
 * Update Chain Behaviours: I'm under the impression that UpdateChains may 
 behave differently in ESS than in a SolrCloud cluster. Is it safe to depend on 
 consistent behaviour here?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7746) Ping requests stopped working with distrib=true in Solr 5.2.1

2015-08-20 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706065#comment-14706065
 ] 

Gregory Chanan commented on SOLR-7746:
--

Looks good Michael, a few questions/comments:

{code}
(params.getBool(ShardParams.IS_SHARD,false))
{code}

Convention is to put a space after ','.  Also, are you using tabs? Please 
remove them.

{code}
handler = core.getRequestHandler( null );
ModifiableSolrParams wparams = new ModifiableSolrParams(params);
wparams.remove(CommonParams.QT);
req.setParams(wparams);
{code}
Is it correct to remove the QT, or should the QT be replaced with the default 
handler you are calling?

{code}
 // In case it's a query for shard, return the result from delegated handler 
for distributed query to merge result
  if (params.getBool(ShardParams.IS_SHARD,false)) {
core.execute(handler, req, rsp );
ex = rsp.getException(); 
  } else {
   core.execute(handler, req, pingrsp );
ex = pingrsp.getException(); 
  }
...
 if (!params.getBool(ShardParams.IS_SHARD,false)) {
rsp.add( status, OK );
 }
{code}

Is all the if-elsing necessary?  What happens if you use pingrsp whether 
IS_SHARD is true or not, and then remove the if around the status check?  What 
you have now doesn't look correct to me; the non-IS_SHARD case won't have OK 
status, right?
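
For reference, roughly what I have in mind (an untested sketch, reusing the 
names from the patch above):

{code}
// Untested sketch: execute into pingrsp regardless of IS_SHARD, propagate
// any exception, and report status without the surrounding if.
core.execute(handler, req, pingrsp);
ex = pingrsp.getException();
if (ex == null) {
  rsp.add("status", "OK");
}
{code}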

 Ping requests stopped working with distrib=true in Solr 5.2.1
 -

 Key: SOLR-7746
 URL: https://issues.apache.org/jira/browse/SOLR-7746
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.2.1
Reporter: Alexey Serba
 Attachments: SOLR-7746.patch, SOLR-7746.patch, SOLR-7746.patch


 {noformat:title=steps to reproduce}
 # start 1 node SolrCloud cluster
 sh ./bin/solr -c -p 
 # create a test collection (we won’t use it, but I just want it to load 
 solr configs to Zk)
 ./bin/solr create_collection -c test -d sample_techproducts_configs -p 
 # create another test collection with 2 shards
 curl 
 'http://localhost:/solr/admin/collections?action=CREATE&name=test2&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=test'
 # try distrib ping request
 curl 
 'http://localhost:/solr/test2/admin/ping?wt=json&distrib=true&indent=true'
 ...
   "error":{
 "msg":"Ping query caused exception: Error from server at 
 http://192.168.59.3:/solr/test2_shard2_replica1: Cannot execute the 
 PingRequestHandler recursively",
 ...
 {noformat}
 {noformat:title=Exception}
 2116962 [qtp599601600-13] ERROR org.apache.solr.core.SolrCore  [test2 shard2 
 core_node1 test2_shard2_replica1] – org.apache.solr.common.SolrException: 
 Cannot execute the PingRequestHandler recursively
   at 
 org.apache.solr.handler.PingRequestHandler.handlePing(PingRequestHandler.java:246)
   at 
 org.apache.solr.handler.PingRequestHandler.handleRequestBody(PingRequestHandler.java:211)
   at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
   at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2653 - Failure!

2015-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2653/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
ERROR: SolrIndexSearcher opens=26 closes=25

Stack Trace:
java.lang.AssertionError: ERROR: SolrIndexSearcher opens=26 closes=25
at __randomizedtesting.SeedInfo.seed([D8936E1B2BCF3727]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingSearchers(SolrTestCaseJ4.java:467)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:233)
at sun.reflect.GeneratedMethodAccessor90.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.SolrCloudExampleTest: 
1) Thread[id=14423, name=searcherExecutor-7451-thread-1, state=WAITING, 
group=TGRP-SolrCloudExampleTest] at sun.misc.Unsafe.park(Native Method) 
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.SolrCloudExampleTest: 
   1) Thread[id=14423, name=searcherExecutor-7451-thread-1, state=WAITING, 
group=TGRP-SolrCloudExampleTest]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([D8936E1B2BCF3727]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SolrCloudExampleTest

Error Message:
There are still zombie threads that 

[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705915#comment-14705915
 ] 

Karl Wright commented on LUCENE-6699:
-

Do you happen to have the how-to-reproduce line?  This produces no failure: 
ant -Dtests.seed=6700D50161C38330 -Dtestcase=TestGeo3dPointField test



 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?
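
 For the "stuff all 3 into a single long" idea above, a hypothetical sketch 
 (the actual Geo3DPacking.java attachment may differ; 21 bits per dimension, 
 values assumed normalized to [-1,1]):

 {code}
 // Quantize each of x/y/z to 21 bits and pack them into one long (63 bits).
 static long encode(double x, double y, double z) {
   return (q(x) << 42) | (q(y) << 21) | q(z);
 }
 static long q(double v) {                  // 21-bit quantization of [-1,1]
   return (long) ((v + 1.0) * 0.5 * ((1 << 21) - 1));
 }
 static double decodeX(long packed) { return unq(packed >>> 42); }
 static double decodeY(long packed) { return unq(packed >>> 21); }
 static double decodeZ(long packed) { return unq(packed); }
 static double unq(long bits) {
   return (bits & ((1 << 21) - 1)) / (double) ((1 << 21) - 1) * 2.0 - 1.0;
 }
 {code}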



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7953) BaseEditorialTransformer (elevated) doesn't work with LazyField

2015-08-20 Thread Ryan Josal (JIRA)
Ryan Josal created SOLR-7953:


 Summary: BaseEditorialTransformer (elevated) doesn't work with 
LazyField
 Key: SOLR-7953
 URL: https://issues.apache.org/jira/browse/SOLR-7953
 Project: Solr
  Issue Type: Bug
Affects Versions: 5.2, 4.10.2
Reporter: Ryan Josal


When using the QueryElevationComponent, the [elevated] docTransformer 
doesn't always work.  In the case where the document is a LazyDocument, 
BaseEditorialTransformer#getKey will return LazyField.toString(), which is 
Object#toString(), which of course isn't going to match any of the uniqueKeys.  
The fix is to change getKey to check instanceof IndexableField instead of just 
Field.  I'm not sure of the impact of this bug because I don't know how often 
LazyDocuments get used.
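
Something along these lines (an untested sketch; the actual signature of getKey 
in BaseEditorialTransformer may differ slightly):

{code}
// Untested sketch of the proposed fix: LazyDocument's LazyField implements
// IndexableField but is not a Field, so check the interface instead and
// read the stored value rather than falling back to Object#toString().
protected String getKey(SolrDocument doc) {
  Object obj = doc.get(idFieldName);
  if (obj instanceof IndexableField) {  // was: obj instanceof Field
    IndexableField f = (IndexableField) obj;
    Number n = f.numericValue();
    return n != null ? n.toString() : f.stringValue();
  }
  return obj.toString();
}
{code}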



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright updated LUCENE-6699:

Attachment: LUCENE-6699.patch

Same cause, same fix.

The circle radius was about 1/6 of the previous small value (1e-6), and the 
corresponding required increase in MINIMUM_RESOLUTION wound up being about 3x 
(to 6e-11).  There are still 4.5 orders of magnitude between these values.  If 
the pattern continues, we seem to be converging on a value for 
MINIMUM_RESOLUTION that is somewhere around 1e-9.

Unfortunately, I've tried MINIMUM_RESOLUTION values around 1e-10 before and 
other stuff started breaking as a result.  So I really hope this pattern 
doesn't continue.


 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7948) MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1

2015-08-20 Thread davidchiu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706108#comment-14706108
 ] 

davidchiu commented on SOLR-7948:
-

Do you mean that I should add mapreduce.job.user.classpath.first=true into 
mapred-site.xml?
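
For example, setting it programmatically on the job configuration instead (just 
a sketch of my understanding, assuming a org.apache.hadoop.mapreduce.Job 
instance named job):

{code}
// Make user-supplied jars win over Hadoop's bundled (older) httpclient.
org.apache.hadoop.conf.Configuration conf = job.getConfiguration();
conf.setBoolean("mapreduce.job.user.classpath.first", true);
{code}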

 MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1
 -

 Key: SOLR-7948
 URL: https://issues.apache.org/jira/browse/SOLR-7948
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
 Environment: OS:suse 11
 JDK:java version 1.7.0_65 
 Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
 Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
 HADOOP:hadoop 2.7.1 
 SOLR:5.2.1
Reporter: davidchiu
Assignee: Mark Miller

 When I used the MapReduceIndexerTool of 5.2.1 to index files, I got the 
 following errors, but when I used 4.9.0's MapReduceIndexerTool, it did work 
 with hadoop 2.7.1.
 Exception ERROR as follows:
 INFO  - 2015-08-20 11:44:45.155; [   ] org.apache.solr.hadoop.HeartBeater; 
 Heart beat reporting class is 
 org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
 INFO  - 2015-08-20 11:44:45.161; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Using this unpacked directory as 
 solr home: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.162; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Creating embedded Solr server with 
 solrHomeDir: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip,
  fs: 
 DFS[DFSClient[clientName=DFSClient_attempt_1440040092614_0004_r_01_0_1678264055_1,
  ugi=root (auth:SIMPLE)]], outputShardDir: 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.194; [   ] 
 org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for 
 directory: 
 '/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/'
 INFO  - 2015-08-20 11:44:45.206; [   ] org.apache.solr.hadoop.HeartBeater; 
 HeartBeat thread running
 INFO  - 2015-08-20 11:44:45.207; [   ] org.apache.solr.hadoop.HeartBeater; 
 Issuing heart beat for 1 threads
 INFO  - 2015-08-20 11:44:45.418; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Constructed instance information 
 solr.home 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
  
 (/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip),
  instance dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/,
  conf dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/conf/,
  writing index to solr.data.dir 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1/data,
  with permdir 
 hdfs://127.0.0.10:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.426; [   ] org.apache.solr.core.SolrXmlConfig; 
 Loading container configuration from 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/solr.xml
 INFO  - 2015-08-20 11:44:45.474; [   ] 
 org.apache.solr.core.CorePropertiesLocator; Config-defined core root 
 directory: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; 
 New CoreContainer 1656436773
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; 
 Loading cores into CoreContainer 
 [instanceDir=/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/]
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; 
 loading shared 

[jira] [Comment Edited] (SOLR-7948) MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1

2015-08-20 Thread davidchiu (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14706118#comment-14706118
 ] 

davidchiu edited comment on SOLR-7948 at 8/21/15 2:14 AM:
--

I dug into the problem again and found that the httpclient-4.4.1 in solr 5.2.1 
conflicted with the httpclient-4.2.5 in hadoop 2.7.1. I replaced the 
httpclient-4.2.5 in hadoop 2.7.1 (just under hadoop/common/lib) with 
httpclient-4.4.1, and it went through.

By the way, there is a bug in httpclient 4.4.1: in URLEncodedUtils.java, the 
function parse(final String s, final Charset charset) doesn't verify the 
parameter s, which can cause a NullPointerException sometimes.
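
A defensive wrapper avoids it, e.g. (an untested sketch against the 4.4.1 
signature):

{code}
import java.nio.charset.Charset;
import java.util.Collections;
import java.util.List;
import org.apache.http.NameValuePair;
import org.apache.http.client.utils.URLEncodedUtils;

// Untested sketch: guard the null/empty input case before delegating to
// URLEncodedUtils.parse(String, Charset), which can NPE on null input.
static List<NameValuePair> safeParse(String s, Charset charset) {
  if (s == null || s.isEmpty()) {
    return Collections.emptyList();
  }
  return URLEncodedUtils.parse(s, charset);
}
{code}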



was (Author: davidchiu):
I digged the problem again, I found that the httpclient-4.4.1 in solr 5.2.1 
conflicted with the httpclient-4.2.5 in hadoop 2.7.1, I replaced the 
httpclient-4.2.5 in hadoop 2.7.1(just under hadoop/common/lib) with the 
httpclient-4.4.1, it went through.

By the way, there is a bug in httpclient 4.4.1, in URLEncodedUtils.java, 
function of parse(final String s, final Charset charset) doesn't valid 
parameter of s, it will cause nullpointexception sometimes.


 MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1
 -

 Key: SOLR-7948
 URL: https://issues.apache.org/jira/browse/SOLR-7948
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
 Environment: OS:suse 11
 JDK:java version 1.7.0_65 
 Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
 Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
 HADOOP:hadoop 2.7.1 
 SOLR:5.2.1
Reporter: davidchiu
Assignee: Mark Miller

 When I used the MapReduceIndexerTool of 5.2.1 to index files, I got the 
 following errors, but when I used 4.9.0's MapReduceIndexerTool, it did work 
 with hadoop 2.7.1.
 Exception ERROR as follows:
 INFO  - 2015-08-20 11:44:45.155; [   ] org.apache.solr.hadoop.HeartBeater; 
 Heart beat reporting class is 
 org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
 INFO  - 2015-08-20 11:44:45.161; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Using this unpacked directory as 
 solr home: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.162; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Creating embedded Solr server with 
 solrHomeDir: 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip,
  fs: 
 DFS[DFSClient[clientName=DFSClient_attempt_1440040092614_0004_r_01_0_1678264055_1,
  ugi=root (auth:SIMPLE)]], outputShardDir: 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.194; [   ] 
 org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for 
 directory: 
 '/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/'
 INFO  - 2015-08-20 11:44:45.206; [   ] org.apache.solr.hadoop.HeartBeater; 
 HeartBeat thread running
 INFO  - 2015-08-20 11:44:45.207; [   ] org.apache.solr.hadoop.HeartBeater; 
 Issuing heart beat for 1 threads
 INFO  - 2015-08-20 11:44:45.418; [   ] 
 org.apache.solr.hadoop.SolrRecordWriter; Constructed instance information 
 solr.home 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
  
 (/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip),
  instance dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/,
  conf dir 
 /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/conf/,
  writing index to solr.data.dir 
 hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1/data,
  with permdir 
 hdfs://127.0.0.10:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.426; [   ] org.apache.solr.core.SolrXmlConfig; 
 Loading container configuration from 
 

[jira] [Commented] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins

2015-08-20 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705399#comment-14705399
 ] 

Shalin Shekhar Mangar commented on SOLR-7602:
-

I guess these commits fixed the problem. Can we close this issue?

 Frequent MultiThreadedOCPTest failures on Jenkins
 -

 Key: SOLR-7602
 URL: https://issues.apache.org/jira/browse/SOLR-7602
 Project: Solr
  Issue Type: Bug
Reporter: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7602.patch, SOLR-7602.patch


 The number of failed MultiThreadedOCPTest runs on Jenkins has gone up 
 drastically since Apr 30, 2015.
 {code}
 REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.test
 Error Message:
 Captured an uncaught exception in thread: Thread[id=6313, 
 name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, 
 group=TGRP-MultiThreadedOCPTest]
 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=6313, 
 name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, 
 group=TGRP-MultiThreadedOCPTest]
 at 
 __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0)
 Caused by: java.lang.AssertionError: Too many closes on SolrCore
 at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0)
 at org.apache.solr.core.SolrCore.close(SolrCore.java:1138)
 at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219)
 at 
 org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 Last failure:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6755) more tests of ToChildBlockJoinScorer.advance

2015-08-20 Thread Hoss Man (JIRA)
Hoss Man created LUCENE-6755:


 Summary: more tests of ToChildBlockJoinScorer.advance
 Key: LUCENE-6755
 URL: https://issues.apache.org/jira/browse/LUCENE-6755
 Project: Lucene - Core
  Issue Type: Test
Reporter: Hoss Man


I recently helped diagnose some strange errors with ToChildBlockJoinQuery in an 
older version of Solr which led me to realize that the problem seemed to have 
been fixed by LUCENE-6593 -- however, the tests Adrien added in that issue 
focused specifically on the interaction of ToChildBlockJoinScorer with the 
(fairly new) approximations support in Scorers (evidently that was the trigger 
that caused Adrien to investigate and make the fixes).

However, in my initial diagnoses / testing, there were at least 2 
(non-approximation-based) situations where the _old_ code was problematic:

* ToChildBlockJoinScorer.advance didn't satisfy the nextDoc-equivalent 
behavior contract in the special case where the first doc in a segment was a 
parent w/o any kids
* in indexes that used multiple levels of hierarchy, a BooleanQuery that 
combined multiple ToChildBlockJoinQueries using different parent filters -- ie: 
find docs that are _children_ of X and _grandchildren_ of Y

As mentioned, Adrien's changes in LUCENE-6593 seemed to fix both of these 
problematic situations, but I'm opening this issue to track the addition of 
some new tests to explicitly cover these situations to protect us against 
future regression.
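
For reference, the nextDoc-equivalent contract in question is the standard 
DocIdSetIterator one (paraphrased; scorerA/scorerB below are hypothetical, 
identically positioned scorers over the same query):

{code}
// advance(target) must land on the first doc >= target, so advancing to
// docID() + 1 must behave exactly like nextDoc().
int viaNext = scorerA.nextDoc();
int viaAdvance = scorerB.advance(scorerB.docID() + 1);
assert viaNext == viaAdvance;
{code}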




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-20 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705499#comment-14705499
 ] 

Gregory Chanan commented on SOLR-7950:
--

bq. 
https://github.com/apache/lucene-solr/blob/trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientConfigurer.java#L80

Press 'y' when linking these so you get a static SHA, i.e. 
https://github.com/apache/lucene-solr/blob/f9799d3b7f62405b96480dead52c5611e99ab3e7/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientConfigurer.java#L80

 Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)
 -

 Key: SOLR-7950
 URL: https://issues.apache.org/jira/browse/SOLR-7950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3, Trunk
Reporter: Hrishikesh Gadre
Assignee: Gregory Chanan

 When using kerberos authentication mechanism (SPNEGO auth scheme), the Apache 
 Http client is incorrectly configured with *all* auth schemes (e.g. Basic, 
 Digest, NTLM, Kerberos, Negotiate etc.) instead of just 'Negotiate'. 
 This issue was identified after configuring Solr with both Basic + Negotiate 
 authentication schemes simultaneously. The problem in this case is that Http 
 client is configured with Kerberos credentials and the default (and 
 incorrect) auth scheme configuration prefers Basic authentication over 
 Kerberos. Since the basic authentication credentials are missing, the 
 authentication and as a result the Http request fails. (I ran into this 
 problem while creating a collection where there is an internal communication 
 between Solr servers).
 The root cause for this issue is that, AbstractHttpClient::getAuthSchemes() 
 API call prepares an AuthSchemeRegistry instance with all possible 
 authentication schemes. Hence when we register the SPNEGO auth scheme in Solr 
 codebase, it overrides the previous configuration for SPNEGO - but doesn't 
 remove the other auth schemes from the client configuration. Please take a 
 look at relevant code snippet.
 https://github.com/apache/lucene-solr/blob/trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientConfigurer.java#L80
 A trivial fix would be to prepare a new AuthSchemeRegistry instance 
 configured with just the SPNEGO mechanism and set it in the HttpClient.
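
 Something along these lines, perhaps (an untested sketch against the 
 HttpClient 4.x APIs; httpClient here stands for the client instance being 
 configured):

 {code}
 import org.apache.http.auth.AuthSchemeRegistry;
 import org.apache.http.client.params.AuthPolicy;
 import org.apache.http.impl.auth.SPNegoSchemeFactory;
 import org.apache.http.impl.client.AbstractHttpClient;

 // Untested sketch: register only the Negotiate (SPNEGO) scheme, so the
 // client cannot prefer Basic/Digest/NTLM over Kerberos.
 AuthSchemeRegistry registry = new AuthSchemeRegistry();
 registry.register(AuthPolicy.SPNEGO, new SPNegoSchemeFactory(true));
 ((AbstractHttpClient) httpClient).setAuthSchemes(registry);
 {code}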



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b60) - Build # 13934 - Failure!

2015-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13934/
Java: 32bit/jdk1.9.0-ea-b60 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.lucene.search.join.TestBlockJoin.testMultiChildQueriesOfDiffParentLevels

Error Message:
Parent query yields document which is not matched by parents filter, docID=141

Stack Trace:
java.lang.IllegalStateException: Parent query yields document which is not 
matched by parents filter, docID=141
at 
__randomizedtesting.SeedInfo.seed([2E6B8DACD64A7ADA:FF8E926036DC83BC]:0)
at 
org.apache.lucene.search.join.ToChildBlockJoinQuery$ToChildBlockJoinScorer.validateParentDoc(ToChildBlockJoinQuery.java:245)
at 
org.apache.lucene.search.join.ToChildBlockJoinQuery$ToChildBlockJoinScorer.advance(ToChildBlockJoinQuery.java:271)
at 
org.apache.lucene.search.AssertingScorer.advance(AssertingScorer.java:115)
at 
org.apache.lucene.search.ConjunctionDISI.doNext(ConjunctionDISI.java:118)
at 
org.apache.lucene.search.ConjunctionDISI.nextDoc(ConjunctionDISI.java:151)
at 
org.apache.lucene.search.ConjunctionScorer.nextDoc(ConjunctionScorer.java:62)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:216)
at 
org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:169)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at 
org.apache.lucene.search.AssertingBulkScorer.score(AssertingBulkScorer.java:70)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
at 
org.apache.lucene.search.AssertingIndexSearcher.search(AssertingIndexSearcher.java:92)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:425)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:402)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:413)
at 
org.apache.lucene.search.join.TestBlockJoin.testMultiChildQueriesOfDiffParentLevels(TestBlockJoin.java:1720)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: VOTE: RC1 release of apache-solr-ref-guide-5.3.pdf

2015-08-20 Thread Steve Rowe
+1 to release RC1.

I found a few minor formatting issues and a typo that I’ll fix; they don’t 
warrant a respin.

Steve

 On Aug 20, 2015, at 12:07 PM, Cassandra Targett casstarg...@gmail.com wrote:
 
 Please VOTE to release the following as apache-solr-ref-guide-5.3.pdf
 
 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.3-RC1/
 
 $cat apache-solr-ref-guide-5.3.pdf.sha1 
 1255cba4413023e30aff345d30bce33846189975  apache-solr-ref-guide-5.3.pdf
 
 
 
 Here's my +1.
 
 Thanks,
 
 Cassandra
 


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7560) Parallel SQL Support

2015-08-20 Thread Susheel Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705552#comment-14705552
 ] 

Susheel Kumar commented on SOLR-7560:
-

Thanks, Eric, for pointing out the server dist target. Now I am able to run 
basic SQL and will start looking into it more deeply.

 Parallel SQL Support
 

 Key: SOLR-7560
 URL: https://issues.apache.org/jira/browse/SOLR-7560
 Project: Solr
  Issue Type: New Feature
  Components: clients - java, search
Reporter: Joel Bernstein
 Fix For: Trunk

 Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, 
 SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch


 This ticket provides support for executing *Parallel SQL* queries across 
 SolrCloud collections. The SQL engine will be built on top of the Streaming 
 API (SOLR-7082), which provides support for *parallel relational algebra* and 
 *real-time map-reduce*.
 Basic design:
 1) A new SQLHandler will be added to process SQL requests. The SQL statements 
 will be compiled to live Streaming API objects for parallel execution across 
 SolrCloud worker nodes.
 2) SolrCloud collections will be abstracted as *Relational Tables*. 
 3) The Presto SQL parser will be used to parse the SQL statements.
 4) A JDBC thin client will be added as a Solrj client.
 This ticket will focus on putting the framework in place and providing basic 
 SELECT support and GROUP BY aggregate support.
 Future releases will build on this framework to provide additional SQL 
 features.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to handle this request" exception, even usage errors

2015-08-20 Thread Elaine Cario (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705436#comment-14705436
 ] 

Elaine Cario commented on SOLR-7951:


Yes - the original exception does get wrapped, and we'll need to unwrap it in 
our app as a workaround so we can get at the correct exception to display back 
to the user (and keep Operations from panicking!), at least until it can be 
fixed in Solr.  I've pasted the full message below (with server info redacted):

<lst name="error">
  <str name="msg">org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this request:[http://...]</str>
  <str name="trace">org.apache.solr.common.SolrException: 
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://...]
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:311)
at 
com.wolterskluwer.atlas.solr.requesthandlers.WKRequestHandler.handleRequestBody(WKRequestHandler.java:123)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at 
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at 
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at 
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at 
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at 
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:170)
at 
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:98)
at 
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:950)
at 
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at 
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:408)
at 
org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1040)
at 
org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:607)
at 
org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:313)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.client.solrj.SolrServerException: No live 
SolrServers available to handle this request:[http://...]
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:319)
at 
org.apache.solr.handler.component.HttpShardHandlerFactory.makeLoadBalancedRequest(HttpShardHandlerFactory.java:205)
at 
org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:162)
at 
org.apache.solr.handler.component.HttpShardHandler$1.call(HttpShardHandler.java:119)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
... 3 more
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Exceeded 
maximum of 1000 basic queries.
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.doRequest(LBHttpSolrServer.java:340)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:309)
... 9 more
</str>
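
For now the workaround on our side looks roughly like this (a sketch; 
caughtException stands for the wrapped exception above):

{code}
// Walk the cause chain of the wrapping SolrServerException to surface the
// underlying error (here: "Exceeded maximum of 1000 basic queries.").
Throwable t = caughtException;
while (t.getCause() != null) {
  t = t.getCause();
}
String userMessage = t.getMessage();
{code}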

 LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to 
 handle this request" exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor

 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we 

[jira] [Assigned] (SOLR-7950) Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)

2015-08-20 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan reassigned SOLR-7950:


Assignee: Gregory Chanan

 Invalid auth scheme configuration of Http client when using Kerberos (SPNEGO)
 -

 Key: SOLR-7950
 URL: https://issues.apache.org/jira/browse/SOLR-7950
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10.3, Trunk
Reporter: Hrishikesh Gadre
Assignee: Gregory Chanan

 When using kerberos authentication mechanism (SPNEGO auth scheme), the Apache 
 Http client is incorrectly configured with *all* auth schemes (e.g. Basic, 
 Digest, NTLM, Kerberos, Negotiate etc.) instead of just 'Negotiate'. 
 This issue was identified after configuring Solr with both Basic + Negotiate 
 authentication schemes simultaneously. The problem in this case is that Http 
 client is configured with Kerberos credentials and the default (and 
 incorrect) auth scheme configuration prefers Basic authentication over 
 Kerberos. Since the basic authentication credentials are missing, the 
 authentication and as a result the Http request fails. (I ran into this 
 problem while creating a collection where there is an internal communication 
 between Solr servers).
 The root cause for this issue is that, AbstractHttpClient::getAuthSchemes() 
 API call prepares an AuthSchemeRegistry instance with all possible 
 authentication schemes. Hence when we register the SPNEGO auth scheme in Solr 
 codebase, it overrides the previous configuration for SPNEGO - but doesn't 
 remove the other auth schemes from the client configuration. Please take a 
 look at relevant code snippet.
 https://github.com/apache/lucene-solr/blob/trunk/solr/solrj/src/java/org/apache/solr/client/solrj/impl/Krb5HttpClientConfigurer.java#L80
 A trivial fix would be to prepare a new AuthSchemeRegistry instance 
 configured with just the SPNEGO mechanism and set it in the HttpClient.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Lucene/Solr release notes

2015-08-20 Thread Noble Paul
I’ve made drafts for the Lucene and Solr release notes - please feel
free to edit or suggest edits:

Lucene: https://wiki.apache.org/lucene-java/ReleaseNote53

Solr: http://wiki.apache.org/solr/ReleaseNote53


-- 
-
Noble Paul

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6755) more tests of ToChildBlockJoinScorer.advance

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705529#comment-14705529
 ] 

ASF subversion and git services commented on LUCENE-6755:
-

Commit 1696837 from hoss...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1696837 ]

LUCENE-6755: more tests of ToChildBlockJoinScorer.advance (merge r1696834)

 more tests of ToChildBlockJoinScorer.advance
 

 Key: LUCENE-6755
 URL: https://issues.apache.org/jira/browse/LUCENE-6755
 Project: Lucene - Core
  Issue Type: Test
Reporter: Hoss Man
 Fix For: Trunk, 5.4


 I recently helped diagnose some strange errors with ToChildBlockJoinQuery in 
 an older version of Solr which led me to realize that the problem seemed to 
 have been fixed by LUCENE-6593 -- however, the tests Adrien added in that 
 issue focused specifically on the interaction of ToChildBlockJoinScorer with 
 the (fairly new) approximations support in Scorers (evidently that was the 
 trigger that caused Adrien to investigate and make the fixes).
 However, in my initial diagnoses / testing, there were at least 2 
 (non-approximation-based) situations where the _old_ code was problematic:
 * ToChildBlockJoinScorer.advance didn't satisfy the nextDoc-equivalent 
 behavior contract in the special case where the first doc in a segment was a 
 parent w/o any kids
 * in indexes that used multiple levels of hierarchy, a BooleanQuery that 
 combined multiple ToChildBlockJoinQueries using different parent filters -- 
 ie: find docs that are _children_ of X and _grandchildren_ of Y
 As mentioned, Adrien's changes in LUCENE-6593 seemed to fix both of these 
 problematic situations, but I'm opening this issue to track the addition of 
 some new tests to explicitly cover these situations to protect us against 
 future regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6755) more tests of ToChildBlockJoinScorer.advance

2015-08-20 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved LUCENE-6755.
--
   Resolution: Fixed
 Assignee: Hoss Man
Fix Version/s: 5.4
   Trunk

 more tests of ToChildBlockJoinScorer.advance
 

 Key: LUCENE-6755
 URL: https://issues.apache.org/jira/browse/LUCENE-6755
 Project: Lucene - Core
  Issue Type: Test
Reporter: Hoss Man
Assignee: Hoss Man
 Fix For: Trunk, 5.4


 I recently helped diagnose some strange errors with ToChildBlockJoinQuery in 
 an older version of Solr which led me to realize that the problem seemed to 
 have been fixed by LUCENE-6593 -- however, the tests Adrien added in that 
 issue focused specifically on the interaction of ToChildBlockJoinScorer with 
 the (fairly new) approximations support in Scorers (evidently that was the 
 trigger that caused Adrien to investigate and make the fixes).
 However, in my initial diagnoses / testing, there were at least 2 
 (non-approximation-based) situations where the _old_ code was problematic:
 * ToChildBlockJoinScorer.advance didn't satisfy the nextDoc-equivalent 
 behavior contract in the special case where the first doc in a segment was a 
 parent w/o any kids
 * in indexes that used multiple levels of hierarchy, a BooleanQuery that 
 combined multiple ToChildBlockJoinQueries using different parent filters -- 
 ie: find docs that are _children_ of X and _grandchildren_ of Y
 As mentioned, Adrien's changes in LUCENE-6593 seemed to fix both of these 
 problematic situations, but I'm opening this issue to track the addition of 
 some new tests to explicitly cover these situations to protect us against 
 future regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to handle this request" exception, even usage errors

2015-08-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705406#comment-14705406
 ] 

Mark Miller commented on SOLR-7951:
---

Indeed. Thanks for the report, needs to be fixed.

 LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to 
 handle this request" exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor

 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages with any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 Traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7951) LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to handle this request" exception, even usage errors

2015-08-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7951?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705413#comment-14705413
 ] 

Mark Miller commented on SOLR-7951:
---

bq. No live SolrServers available to handle this request

It does look like you should still get the original stack trace as a root 
exception? The message is still not being used correctly, but can you confirm 
that?

 LBHttpSolrClient wraps ALL exceptions in "No live SolrServers available to 
 handle this request" exception, even usage errors
 

 Key: SOLR-7951
 URL: https://issues.apache.org/jira/browse/SOLR-7951
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 5.2.1
Reporter: Elaine Cario
Priority: Minor

 We were experiencing many "No live SolrServers available to handle this 
 request" exceptions, even though we saw no outages with any of our servers.  
 It turned out the actual exceptions were related to the use of wildcards in 
 span queries (and in some cases other invalid queries or usage-type issues). 
 Traced it back to LBHttpSolrClient, which was wrapping all exceptions, even 
 plain SolrExceptions, in that outer exception.  
 Instead, wrapping in the outer exception should be reserved for true 
 communication issues in SolrCloud, and usage exceptions should be thrown as 
 is.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7602) Frequent MultiThreadedOCPTest failures on Jenkins

2015-08-20 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta resolved SOLR-7602.

Resolution: Fixed

Marking this as resolved, as we haven't seen these failures since the fix was 
committed.

 Frequent MultiThreadedOCPTest failures on Jenkins
 -

 Key: SOLR-7602
 URL: https://issues.apache.org/jira/browse/SOLR-7602
 Project: Solr
  Issue Type: Bug
Reporter: Anshum Gupta
 Fix For: 5.2

 Attachments: SOLR-7602.patch, SOLR-7602.patch


 The number of failed MultiThreadedOCPTest runs on Jenkins has gone up 
 drastically since Apr 30, 2015.
 {code}
 REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.test
 Error Message:
 Captured an uncaught exception in thread: Thread[id=6313, 
 name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, 
 group=TGRP-MultiThreadedOCPTest]
 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=6313, 
 name=parallelCoreAdminExecutor-1988-thread-15, state=RUNNABLE, 
 group=TGRP-MultiThreadedOCPTest]
 at 
 __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B:97852558779175A3]:0)
 Caused by: java.lang.AssertionError: Too many closes on SolrCore
 at __randomizedtesting.SeedInfo.seed([1FD11A82D96D185B]:0)
 at org.apache.solr.core.SolrCore.close(SolrCore.java:1138)
 at org.apache.solr.common.util.IOUtils.closeQuietly(IOUtils.java:31)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:535)
 at org.apache.solr.core.CoreContainer.create(CoreContainer.java:494)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:598)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:212)
 at 
 org.apache.solr.handler.admin.CoreAdminHandler$ParallelCoreAdminHandlerThread.run(CoreAdminHandler.java:1219)
 at 
 org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$1.run(ExecutorUtil.java:148)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:745)
 {code}
 Last failure:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/12665/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6755) more tests of ToChildBlockJoinScorer.advance

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705458#comment-14705458
 ] 

ASF subversion and git services commented on LUCENE-6755:
-

Commit 1696834 from hoss...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1696834 ]

LUCENE-6755: more tests of ToChildBlockJoinScorer.advance

 more tests of ToChildBlockJoinScorer.advance
 

 Key: LUCENE-6755
 URL: https://issues.apache.org/jira/browse/LUCENE-6755
 Project: Lucene - Core
  Issue Type: Test
Reporter: Hoss Man

 I recently helped diagnose some strange errors with ToChildBlockJoinQuery in 
 an older version of Solr, which led me to realize that the problem seemed to 
 have been fixed by LUCENE-6593 -- however, the tests Adrien added in that 
 issue focused specifically on the interaction of ToChildBlockJoinScorer 
 with the (fairly new) approximations support in Scorers (evidently that was 
 the trigger that caused Adrien to investigate and make the fixes).
 However, in my initial diagnosis / testing, there were at least 2 (non 
 approximation based) situations where the _old_ code was problematic:
 * ToChildBlockJoinScorer.advance didn't satisfy the nextDoc-equivalent 
 behavior contract in the special case where the first doc in a segment was a 
 parent w/o any kids
 * in indexes that used multiple levels of hierarchy, a BooleanQuery that 
 combined multiple ToChildBlockJoinQueries using different parent filters -- 
 ie: find docs that are _children_ of X and _grandchildren_ of Y
 As mentioned, Adrien's changes in LUCENE-6593 seemed to fix both of these 
 problematic situations, but I'm opening this issue to track the addition of 
 some new tests to explicitly cover these situations to protect us against 
 future regression.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7952) Change DeltaImport from HashSet to LinkedHashSet.

2015-08-20 Thread Pablo Lozano (JIRA)
Pablo Lozano created SOLR-7952:
--

 Summary: Change DeltaImport from HashSet to LinkedHashSet.
 Key: SOLR-7952
 URL: https://issues.apache.org/jira/browse/SOLR-7952
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 5.2.1
Reporter: Pablo Lozano
Priority: Minor


This is only a minor modification which in some cases might be useful for 
certain custom DataSources or ImportHandlers.

The way my imports work is by fetching in batches, so I need to store those 
batches in a disk cache for a certain time, as they are not required in the 
meantime.

I also use some lazy loading, as my batches are not initialized by my custom 
iterators until they are iterated for the first time.

My issue is that the order in which I pass the ids of my documents to the 
ImportHandler during the FIND_DELTA step is not the same order in which they 
are fetched during the DELTA_DUMP step. This causes all my batches to be 
initialized when only one of them needs to be live at a time.

What I would like is to simply change the HashSet used in the collectDelta 
method to a LinkedHashSet. This would help as we would obtain a predictable 
order of documents.
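A quick, self-contained illustration of the difference (plain Java, nothing 
DIH-specific): LinkedHashSet guarantees insertion-order iteration, while 
HashSet's order depends on hashing.

{code}
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;

public class IterationOrderDemo {
  public static void main(String[] args) {
    Set<String> hashed = new HashSet<>();
    Set<String> linked = new LinkedHashSet<>();
    for (String id : new String[] {"doc-3", "doc-1", "doc-2"}) {
      hashed.add(id);
      linked.add(id);
    }
    // HashSet iteration order depends on hash codes and table capacity,
    // so delta keys may come back in a different order than collected.
    System.out.println("HashSet:       " + hashed);
    // LinkedHashSet always iterates in insertion order, which keeps the
    // FIND_DELTA and DELTA_DUMP steps aligned.
    System.out.println("LinkedHashSet: " + linked);
  }
}
{code}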

This may be a very specific case, but the change is simple and shouldn't 
impact anything.

The second option would be to create a deltaImportQuery that would work 
like: select * from table where last_modified > '${dih.last_index_time}'.

I can issue the patch for this.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-7843) Importing Delta creates a memory leak

2015-08-20 Thread Pablo Lozano (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7843?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pablo Lozano closed SOLR-7843.
--
Resolution: Not A Problem

 Importing Delta creates a memory leak
 -

 Key: SOLR-7843
 URL: https://issues.apache.org/jira/browse/SOLR-7843
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 5.2.1
Reporter: Pablo Lozano
  Labels: memory-leak

 The org.apache.solr.handler.dataimport.SolrWriter is not correctly cleaning 
 up after finishing a delta import, as the Set<Object> deltaKeys is not 
 being cleared after the process has finished. 
 When using a custom importer or DataSource, as in my case, I need to add 
 additional parameters to the delta keys.
 When the data import finishes, deltaKeys is not set back to null, and the 
 DataImporter, DocBuilder and SolrWriter are kept as live objects 
 because they are being referenced by the infoRegistry of the SolrCore, 
 which seems to be used for JMX information.
 It appears that starting a second delta import did not free the memory, which 
 may in the long run cause an OutOfMemoryError. I have not checked whether a 
 full import would break the references and free the memory.
 An easy fix would be to set deltaKeys = null in the SolrWriter close method, 
 or to nullify the writer in DocBuilder after it is used in the execute() method.
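 A sketch of the first suggested fix (illustrative only; the real SolrWriter 
 has more state than shown here):
 {code}
 import java.util.Set;

 class SolrWriterSketch {
   private Set<Object> deltaKeys; // populated during the FIND_DELTA step

   public void close() {
     // ... existing close logic ...
     // Drop the reference so the (possibly large) key set can be GC'd even
     // while the infoRegistry keeps the writer itself alive for JMX.
     deltaKeys = null;
   }
 }
 {code}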



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6754) Optimize IndexSearcher.count for simple queries

2015-08-20 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6754:


 Summary: Optimize IndexSearcher.count for simple queries
 Key: LUCENE-6754
 URL: https://issues.apache.org/jira/browse/LUCENE-6754
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


IndexSearcher.count currently always creates a collector to compute the number 
of hits, but it could optimize some queries like MatchAllDocsQuery or TermQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6754) Optimize IndexSearcher.count for simple queries

2015-08-20 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6754:
-
Attachment: LUCENE-6754.patch

Here is a patch. count(MatchAllDocsQuery) returns reader.numDocs() and 
count(TermQuery) returns the sum of the doc freqs if there are no deletions.
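A rough sketch of the two shortcuts described (illustrative only, not the 
committed patch; the real change lives inside IndexSearcher):

{code}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class CountShortcuts {
  static int count(IndexReader reader, Query query) throws IOException {
    if (query instanceof MatchAllDocsQuery) {
      // Every live document matches, so no collector is needed.
      return reader.numDocs();
    }
    if (query instanceof TermQuery && !reader.hasDeletions()) {
      // Without deletions, the doc freq of the term in each segment is
      // exactly the number of matching documents in that segment.
      Term term = ((TermQuery) query).getTerm();
      int total = 0;
      for (LeafReaderContext ctx : reader.leaves()) {
        total += ctx.reader().docFreq(term);
      }
      return total;
    }
    return -1; // fall back to the usual collector-based counting
  }
}
{code}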

 Optimize IndexSearcher.count for simple queries
 ---

 Key: LUCENE-6754
 URL: https://issues.apache.org/jira/browse/LUCENE-6754
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6754.patch


 IndexSearcher.count currently always creates a collector to compute the number 
 of hits, but it could optimize some queries like MatchAllDocsQuery or 
 TermQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705453#comment-14705453
 ] 

Mark Miller commented on SOLR-6760:
---

bq. The rate of processing state operations went up from 4550 requests/min to 
26083 requests/min i.e. a boost of 473%!

Nice, huge win!

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4

 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the Queue. If the 
 items in the queue is much larger (in tens of thousands) , this is 
 counterproductive
 As the overseer queue is a multiple producers + single consumer queue, We can 
 read them all in bulk  and before processing each item , just do a 
 zk.exists(itemname) and if all is well we don't need to do the fetch all + 
 sort thing again
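 A rough sketch of that consume loop (illustrative only, not the actual 
 DistributedQueue code): one bulk getChildren + one sort, then a cheap 
 exists() check per item instead of re-listing the whole directory.
 {code}
 import java.util.List;
 import java.util.TreeSet;
 import org.apache.zookeeper.KeeperException;
 import org.apache.zookeeper.ZooKeeper;

 class BulkQueueSketch {
   private final ZooKeeper zk;
   private final String dir;

   BulkQueueSketch(ZooKeeper zk, String dir) { this.zk = zk; this.dir = dir; }

   void drain() throws KeeperException, InterruptedException {
     List<String> children = zk.getChildren(dir, false); // one bulk read
     for (String item : new TreeSet<>(children)) {       // one sort
       String path = dir + "/" + item;
       if (zk.exists(path, false) == null) {
         continue; // already gone; no need to re-fetch and re-sort everything
       }
       // ... process the item, then remove it from the queue ...
       zk.delete(path, -1);
     }
   }
 }
 {code}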



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7836) Possible deadlock when closing refcounted index writers.

2015-08-20 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705478#comment-14705478
 ] 

Erick Erickson commented on SOLR-7836:
--

bq: I tried applying the last patch and running TestStressReorder and luckily 
it does fail often for me.

Yep, I saw that last night. I looked a bit at whether it was a test artifact, 
and apparently it's not, so I was going to dive into that today.

Anyway, since you're working up alternatives, I'll leave it to you. The current 
checkin (not the latest patch which I won't commit) at least avoids the 
deadlock that started me down this path in the first place. Whether it creates 
other issues is, of course, the $64K question. Let me know if there's anything 
I can do besides cheer you on.


 Possible deadlock when closing refcounted index writers.
 

 Key: SOLR-7836
 URL: https://issues.apache.org/jira/browse/SOLR-7836
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson
 Fix For: Trunk, 5.4

 Attachments: SOLR-7836-reorg.patch, SOLR-7836-synch.patch, 
 SOLR-7836.patch, SOLR-7836.patch, SOLR-7836.patch, deadlock_3.res.zip, 
 deadlock_5_pass_iw.res.zip, deadlock_test


 Preliminary patch for what looks like a possible race condition between 
 writerFree and pauseWriter in DefaultSorlCoreState.
 Looking for comments and/or why I'm completely missing the boat.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6743) Allow Ivy lockStrategy to be overridden by system property.

2015-08-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705500#comment-14705500
 ] 

Mark Miller commented on LUCENE-6743:
-

Supposedly part of this issue (leaving locks behind with the artifact-lock 
impl) was fixed in 2.3, but you can still see it happen in 2.3 and 2.4: 
https://issues.apache.org/jira/browse/IVY-1388?filter=-3

 Allow Ivy lockStrategy to be overridden by system property.
 ---

 Key: LUCENE-6743
 URL: https://issues.apache.org/jira/browse/LUCENE-6743
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6743.patch


 The current hard-coded lock strategy is imperfect and can fail under parallel 
 load. With Ivy 2.4 there is a better option in artifact-lock-nio. We should 
 allow the lock strategy to be overridden like the resolutionCacheDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr release notes

2015-08-20 Thread Noble Paul
I didn't know what the really important features were, so the Lucene
release note is just a TODO page. Please pitch in and fill it:
https://wiki.apache.org/lucene-java/ReleaseNote53

On Fri, Aug 21, 2015 at 12:00 AM, Noble Paul noble.p...@gmail.com wrote:
 I’ve made drafts for the Lucene and Solr release notes - please feel
 free to edit or suggest edits:

 Lucene: https://wiki.apache.org/lucene-java/ReleaseNote53

 Solr: http://wiki.apache.org/solr/ReleaseNote53


 --
 -
 Noble Paul



-- 
-
Noble Paul

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6689) Odd analysis problem with WDF, appears to be triggered by preceding analysis components

2015-08-20 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705543#comment-14705543
 ] 

Shawn Heisey commented on LUCENE-6689:
--

I can work around the specific queries that caused the problem if I make index 
and query WDF analysis exactly the same ... but there's a problem even then.

As a test, I entirely removed the query analysis above and removed the type 
attribute from the index analysis so it applies to both.  I put this fieldType 
into Solr 5.2.1 and went to the analysis screen.

A phrase search for "aaa bbb" when the indexed value was "aaa-bbb: ccc" does 
not match, because the positions are wrong.  I believe that it *should* match.  
A user would most likely expect it to match.

 Odd analysis problem with WDF, appears to be triggered by preceding analysis 
 components
 ---

 Key: LUCENE-6689
 URL: https://issues.apache.org/jira/browse/LUCENE-6689
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 4.8
Reporter: Shawn Heisey

 This problem shows up for me in Solr, but I believe the issue is down at the 
 Lucene level, so I've opened the issue in the LUCENE project.  We can move it 
 if necessary.
 I've boiled the problem down to this minimum Solr fieldType:
 {noformat}
 <fieldType name="testType" class="solr.TextField"
            sortMissingLast="true" positionIncrementGap="100">
   <analyzer type="index">
     <tokenizer class="org.apache.lucene.analysis.icu.segmentation.ICUTokenizerFactory"
                rulefiles="Latn:Latin-break-only-on-whitespace.rbbi"/>
     <filter class="solr.PatternReplaceFilterFactory"
             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$"
             replacement="$2"/>
     <filter class="solr.WordDelimiterFilterFactory"
             splitOnCaseChange="1"
             splitOnNumerics="1"
             stemEnglishPossessive="1"
             generateWordParts="1"
             generateNumberParts="1"
             catenateWords="1"
             catenateNumbers="1"
             catenateAll="0"
             preserveOriginal="1"/>
   </analyzer>
   <analyzer type="query">
     <tokenizer class="org.apache.lucene.analysis.icu.segmentation.ICUTokenizerFactory"
                rulefiles="Latn:Latin-break-only-on-whitespace.rbbi"/>
     <filter class="solr.PatternReplaceFilterFactory"
             pattern="^(\p{Punct}*)(.*?)(\p{Punct}*)$"
             replacement="$2"/>
     <filter class="solr.WordDelimiterFilterFactory"
             splitOnCaseChange="1"
             splitOnNumerics="1"
             stemEnglishPossessive="1"
             generateWordParts="1"
             generateNumberParts="1"
             catenateWords="0"
             catenateNumbers="0"
             catenateAll="0"
             preserveOriginal="0"/>
   </analyzer>
 </fieldType>
 {noformat}
 On Solr 4.7, if this type is given the input "aaa-bbb: ccc" then index 
 analysis puts "aaa" at term position 1 and "bbb" at term position 2.  This 
 seems perfectly reasonable to me.  In Solr 4.9, both terms end up at position 
 2.  This causes phrase queries which used to work to return zero hits.  The 
 exact text of the phrase query is in the original documents that match on 4.7.
 If the custom rbbi (which is included unmodified from the lucene icu analysis 
 source code) is not used, then the problem doesn't happen, because the 
 punctuation doesn't make it to the PRF.  If the PatternReplaceFilterFactory 
 is not present, then the problem doesn't happen.
 I can work around the problem by setting luceneMatchVersion to 4.7, but I 
 think the behavior is a bug, and I would rather not continue to use 4.7 
 analysis when I upgrade to 5.x, which I hope to do soon.
 Whether luceneMatchVersion is LUCENE_47 or LUCENE_4_9, query analysis puts 
 "aaa" at term position 1 and "bbb" at term position 2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b25) - Build # 13931 - Failure!

2015-08-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/13931/
Java: 32bit/jdk1.8.0_60-ea-b25 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [TransactionLog]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [TransactionLog]
at __randomizedtesting.SeedInfo.seed([5C8A030049025509]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:236)
at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:799)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 9755 lines...]
   [junit4] Suite: org.apache.solr.cloud.HttpPartitionTest
   [junit4]   2 Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-trunk-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.HttpPartitionTest_5C8A030049025509-001/init-core-data-001
   [junit4]   2 423168 INFO  
(SUITE-HttpPartitionTest-seed#[5C8A030049025509]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2 423170 INFO  
(TEST-HttpPartitionTest.test-seed#[5C8A030049025509]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2 423170 INFO  (Thread-2484) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2 423170 INFO  (Thread-2484) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2 423270 INFO  
(TEST-HttpPartitionTest.test-seed#[5C8A030049025509]) [] 
o.a.s.c.ZkTestServer start zk server on port:49703
   [junit4]   2 423270 INFO  
(TEST-HttpPartitionTest.test-seed#[5C8A030049025509]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2 423270 INFO  
(TEST-HttpPartitionTest.test-seed#[5C8A030049025509]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2 423272 INFO  (zkCallback-849-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@2b7b2b name:ZooKeeperConnection 
Watcher:127.0.0.1:49703 got event WatchedEvent state:SyncConnected type:None 
path:null path:null type:None
   [junit4]   2 423272 INFO  
(TEST-HttpPartitionTest.test-seed#[5C8A030049025509]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2 423272 INFO  
(TEST-HttpPartitionTest.test-seed#[5C8A030049025509]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2 423272 INFO  
(TEST-HttpPartitionTest.test-seed#[5C8A030049025509]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2 423273 INFO  

[jira] [Commented] (LUCENE-6743) Allow Ivy lockStrategy to be overridden by system property.

2015-08-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704840#comment-14704840
 ] 

Mark Miller commented on LUCENE-6743:
-

I don't mind if someone wants to make an issue to update Ivy, but I don't want 
to deal with the transition complication in this issue.

 Allow Ivy lockStrategy to be overridden by system property.
 ---

 Key: LUCENE-6743
 URL: https://issues.apache.org/jira/browse/LUCENE-6743
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6743.patch


 The current hard-coded lock strategy is imperfect and can fail under parallel 
 load. With Ivy 2.4 there is a better option in artifact-lock-nio. We should 
 allow the lock strategy to be overridden like the resolutionCacheDir.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6760:

Attachment: SOLR-6760-branch_5x.patch

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the Queue. If the 
 items in the queue is much larger (in tens of thousands) , this is 
 counterproductive
 As the overseer queue is a multiple producers + single consumer queue, We can 
 read them all in bulk  and before processing each item , just do a 
 zk.exists(itemname) and if all is well we don't need to do the fetch all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704944#comment-14704944
 ] 

ASF subversion and git services commented on SOLR-6760:
---

Commit 1696789 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1696789 ]

SOLR-6760: New optimized DistributedQueue implementation for overseer

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the Queue. If the 
 items in the queue is much larger (in tens of thousands) , this is 
 counterproductive
 As the overseer queue is a multiple producers + single consumer queue, We can 
 read them all in bulk  and before processing each item , just do a 
 zk.exists(itemname) and if all is well we don't need to do the fetch all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5756) A utility API to move collections from stateFormat=1 to stateFormat=2

2015-08-20 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5756.
-
Resolution: Fixed

That fix seems to have solved the problem. Good sleuthing, Scott!

 A utility API to move collections from stateFormat=1 to stateFormat=2
 -

 Key: SOLR-5756
 URL: https://issues.apache.org/jira/browse/SOLR-5756
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4

 Attachments: CollectionStateFormat2Test-failure-r1695176.log, 
 CollectionStateFormat2Test-failure.log, 
 CollectionStateFormat2Test-passed-r1695176.log, SOLR-5756-fix.patch, 
 SOLR-5756-fix.patch, SOLR-5756-fix.patch-failure.log, SOLR-5756-part2.patch, 
 SOLR-5756-trunk.patch, SOLR-5756.patch, SOLR-5756.patch, SOLR-5756.patch, 
 sarowe-jenkins-Lucene-Solr-tests-trunk-1522-CollectionStateFormat2-failure.txt


 SOLR-5473 allows creation of collections with state stored outside of 
 clusterstate.json. We would need an API to move existing 'internal' 
 collections outside.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705019#comment-14705019
 ] 

ASF subversion and git services commented on LUCENE-6745:
-

Commit 1696798 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1696798 ]

LUCENE-6745: RAMInputStream.clone was not thread safe (Mike McCandless)

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.
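 For reference, a rough sketch of the problematic access pattern (illustrative 
 only; the corruption is timing-dependent, this just demonstrates the 
 concurrent clone-while-reading usage described above):
 {code}
 import org.apache.lucene.store.Directory;
 import org.apache.lucene.store.IOContext;
 import org.apache.lucene.store.IndexInput;
 import org.apache.lucene.store.IndexOutput;
 import org.apache.lucene.store.RAMDirectory;

 public class CloneRaceSketch {
   public static void main(String[] args) throws Exception {
     Directory dir = new RAMDirectory();
     try (IndexOutput out = dir.createOutput("f", IOContext.DEFAULT)) {
       for (int i = 0; i < (1 << 20); i++) out.writeByte((byte) i);
     }
     final IndexInput original = dir.openInput("f", IOContext.DEFAULT);
     Thread cloner = new Thread(() -> {
       // Clones race against the reads below on affected versions.
       for (int i = 0; i < 100_000; i++) original.clone();
     });
     cloner.start();
     for (int i = 0; i < (1 << 20); i++) original.readByte(); // concurrent reads
     cloner.join();
     original.close();
     dir.close();
   }
 }
 {code}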



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Unit testing our UIs

2015-08-20 Thread Jan Høydahl
Hi

We’re adding more and more UIs to Solr, and they have no unit tests (as far as 
I know). I could not find any discussions on this topic in the list archives, 
so I thought to bring it up here.

I only know about Selenium; it could be cool to write up some simple tests 
exercising key parts of the Admin UI in various browsers. Or?
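For instance, a minimal Selenium smoke test could look something like the 
sketch below (the URL and the element id are assumptions for illustration, 
not the actual Admin UI markup):

import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class AdminUISmokeTest {
  @Test
  public void dashboardLoads() {
    WebDriver driver = new FirefoxDriver();
    try {
      // Assumes a locally running Solr instance.
      driver.get("http://localhost:8983/solr/");
      Assert.assertTrue(driver.getTitle().contains("Solr"));
      // Assert that some piece of the dashboard actually rendered.
      Assert.assertFalse(driver.findElements(By.id("content")).isEmpty());
    } finally {
      driver.quit();
    }
  }
}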

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6760.
-
   Resolution: Fixed
Fix Version/s: 5.4
   Trunk

Thanks Scott!

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4

 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the Queue. If the 
 items in the queue is much larger (in tens of thousands) , this is 
 counterproductive
 As the overseer queue is a multiple producers + single consumer queue, We can 
 read them all in bulk  and before processing each item , just do a 
 zk.exists(itemname) and if all is well we don't need to do the fetch all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6760:

Component/s: SolrCloud
 Issue Type: Improvement  (was: Bug)

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4

 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the Queue. If the 
 items in the queue is much larger (in tens of thousands) , this is 
 counterproductive
 As the overseer queue is a multiple producers + single consumer queue, We can 
 read them all in bulk  and before processing each item , just do a 
 zk.exists(itemname) and if all is well we don't need to do the fetch all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6749) TestPerfTasksLogic failure

2015-08-20 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-6749:
--

 Summary: TestPerfTasksLogic failure
 Key: LUCENE-6749
 URL: https://issues.apache.org/jira/browse/LUCENE-6749
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/benchmark
Reporter: Steve Rowe


My Jenkins found a benchmark module failure that I can't reproduce on Linux or 
OS X - I beasted 20 iterations with the seed on each platform:

{noformat}
   [junit4] Suite: org.apache.lucene.benchmark.byTask.TestPerfTasksLogic
   [junit4]   1  starting task: Seq
   [junit4]   1  starting task: Seq
   [junit4]   1  starting task: Rounds
   [junit4]   1 
   [junit4]   1  Report Sum By (any) Name (4 about 4 out of 5)
   [junit4]   1 Operation         round   runCnt   recsPerRun      rec/s   elapsedSec    avgUsedMem   avgTotalMem
   [junit4]   1 Rounds                0        1           20     163.93         0.12    14,755,992   514,850,816
   [junit4]   1 CreateIndex -  -  -   0 -  -   1 -  -  -  -  0 -     0.00 -  -    0.00 -  12,061,112 - 514,850,816
   [junit4]   1 AddDocs_Exhaust       0        1           20   1,333.33         0.01    13,408,552   514,850,816
   [junit4]   1 CloseIndex -  -  -    0 -  -   1 -  -  -  -  0 -     0.00 -  -    0.02 -  14,755,992 - 514,850,816
   [junit4]   1 
   [junit4]   1  starting task: Seq
   [junit4]   1  starting task: Rounds
   [junit4]   1 
   [junit4]   1  Report Sum By (any) Name (4 about 4 out of 5)
   [junit4]   1 Operation         round   runCnt   recsPerRun      rec/s   elapsedSec    avgUsedMem   avgTotalMem
   [junit4]   1 Rounds                0        1           22     164.18         0.13    12,074,760   514,850,816
   [junit4]   1 CreateIndex -  -  -   0 -  -   1 -  -  -  -  1 - 1,000.00 -  -    0.00 -  12,074,760 - 514,850,816
   [junit4]   1 AddDocs_Exhaust       0        1           20   4,000.00         0.00    12,074,760   514,850,816
   [junit4]   1 CloseIndex -  -  -    0 -  -   1 -  -  -  -  1 -    41.67 -  -    0.02 -  12,074,760 - 514,850,816
   [junit4]   1 
   [junit4]   1  starting task: Seq
   [junit4]   1 
   [junit4]   1  Report Sum By Prefix (X) (1 about 1 out of 1012)
   [junit4]   1 Operation         round   runCnt   recsPerRun       rec/s   elapsedSec    avgUsedMem   avgTotalMem
   [junit4]   1 XSearch_2_Par         0        1         7289   14,433.66         0.50    57,401,768   514,850,816
   [junit4]   1 
   [junit4]   1  starting task: Seq
   [junit4]   1 0.36 sec -- 
TEST-TestPerfTasksLogic.testHighlightingNoTvNoStore-seed#[3DD556FDEBB99CC4] 
added  1000 docs
   [junit4]   1  starting task: Seq
   [junit4]   1 0.45 sec -- 
TEST-TestPerfTasksLogic.testHighlightingTV-seed#[3DD556FDEBB99CC4] added  
1000 docs
   [junit4]   2 NOTE: reproduce with: ant test  -Dtestcase=TestPerfTasksLogic 
-Dtests.method=testHighlightingTV -Dtests.seed=3DD556FDEBB99CC4 
-Dtests.slow=true -Dtests.locale=zh_CN -Dtests.timezone=America/Campo_Grande 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.80s J0 | TestPerfTasksLogic.testHighlightingTV 
   [junit4] Throwable #1: java.lang.NullPointerException
   [junit4]at 
__randomizedtesting.SeedInfo.seed([3DD556FDEBB99CC4:5DF7708B74A6745D]:0)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.ReadTask.getFieldsToHighlight(ReadTask.java:300)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.SearchTravRetHighlightTask.getFieldsToHighlight(SearchTravRetHighlightTask.java:115)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.ReadTask.doLogic(ReadTask.java:169)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.PerfTask.runAndMaybeStats(PerfTask.java:146)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.TaskSequence.doSerialTasks(TaskSequence.java:197)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.TaskSequence.doLogic(TaskSequence.java:138)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.PerfTask.runAndMaybeStats(PerfTask.java:146)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.TaskSequence.doSerialTasks(TaskSequence.java:197)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.TaskSequence.doLogic(TaskSequence.java:138)
   [junit4]at 
org.apache.lucene.benchmark.byTask.tasks.PerfTask.runAndMaybeStats(PerfTask.java:146)
   [junit4]at 
org.apache.lucene.benchmark.byTask.utils.Algorithm.execute(Algorithm.java:332)
   [junit4]at 
org.apache.lucene.benchmark.byTask.Benchmark.execute(Benchmark.java:77)
   [junit4]at 
org.apache.lucene.benchmark.BenchmarkTestCase.execBenchmark(BenchmarkTestCase.java:75)
   [junit4]at 

[jira] [Created] (LUCENE-6750) TestMergeSchedulerExternal failure

2015-08-20 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-6750:
--

 Summary: TestMergeSchedulerExternal failure
 Key: LUCENE-6750
 URL: https://issues.apache.org/jira/browse/LUCENE-6750
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


Policeman Jenkins found a failure on OS X 
[http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2649/] that I can't 
reproduce on OS X 10.10.4 using Oracle Java 1.8.0_20, even after beasting 200 
total suite iterations with the seed:

{noformat}
   [junit4] Suite: org.apache.lucene.TestMergeSchedulerExternal
   [junit4]   2 NOTE: reproduce with: ant test  
-Dtestcase=TestMergeSchedulerExternal 
-Dtests.method=testSubclassConcurrentMergeScheduler 
-Dtests.seed=3AF868F9E00E5EBA -Dtests.slow=true -Dtests.locale=ru 
-Dtests.timezone=Europe/London -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.37s J1 | 
TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler 
   [junit4] Throwable #1: java.lang.AssertionError
   [junit4]at 
__randomizedtesting.SeedInfo.seed([3AF868F9E00E5EBA:BD79D554E42E24BE]:0)
   [junit4]at 
org.apache.lucene.TestMergeSchedulerExternal.testSubclassConcurrentMergeScheduler(TestMergeSchedulerExternal.java:116)
   [junit4]at java.lang.Thread.run(Thread.java:745)
   [junit4]   2 NOTE: test params are: codec=Asserting(Lucene53): 
{id=PostingsFormat(name=Memory doPackFST= true)}, docValues:{}, 
sim=DefaultSimilarity, locale=ru, timezone=Europe/London
   [junit4]   2 NOTE: Mac OS X 10.8.5 x86_64/Oracle Corporation 1.8.0_51 
(64-bit)/cpus=3,threads=1,free=16232544,total=54853632
   [junit4]   2 NOTE: All tests run in this JVM: [TestDateSort, 
TestWildcardRandom, TestIndexWriterMergePolicy, TestPackedInts, 
TestSpansAdvanced, TestBooleanOr, TestParallelReaderEmptyIndex, 
TestFixedBitDocIdSet, TestIndexWriterDeleteByQuery, Test4GBStoredFields, 
TestMultiThreadTermVectors, TestIndexWriterConfig, TestToken, 
TestMergeSchedulerExternal]
   [junit4] Completed [21/401] on J1 in 0.39s, 2 tests, 1 failure <<< FAILURES!
{noformat} 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6699) Integrate lat/lon BKD and spatial3d

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705017#comment-14705017
 ] 

ASF subversion and git services commented on LUCENE-6699:
-

Commit 1696797 from [~mikemccand] in branch 'dev/branches/lucene6699'
[ https://svn.apache.org/r1696797 ]

LUCENE-6699: fix math for WGS84 PlanetModel

 Integrate lat/lon BKD and spatial3d
 ---

 Key: LUCENE-6699
 URL: https://issues.apache.org/jira/browse/LUCENE-6699
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Michael McCandless
Assignee: Michael McCandless
 Attachments: Geo3DPacking.java, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, 
 LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch, LUCENE-6699.patch


 I'm opening this for discussion, because I'm not yet sure how to do
 this integration, because of my ignorance about spatial in general and
 spatial3d in particular :)
 Our BKD tree impl is very fast at doing lat/lon shape intersection
 (bbox, polygon, soon distance: LUCENE-6698) against previously indexed
 points.
 I think to integrate with spatial3d, we would first need to record
 lat/lon/z into doc values.  Somewhere I saw discussion about how we
 could stuff all 3 into a single long value with acceptable precision
 loss?  Or, we could use BinaryDocValues?  We need all 3 dims available
 to do the fast per-hit query time filtering.
 But, second: what do we index into the BKD tree?  Can we just index
 earth surface lat/lon, and then at query time is spatial3d able to
 give me an enclosing surface lat/lon bbox for a 3d shape?  Or
 ... must we index all 3 dimensions into the BKD tree (seems like this
 could be somewhat wasteful)?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-20 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-6745.

Resolution: Fixed

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6745) RAMInputStream.clone is not thread safe

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705031#comment-14705031
 ] 

ASF subversion and git services commented on LUCENE-6745:
-

Commit 1696802 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1696802 ]

LUCENE-6745: RAMInputStream.clone was not thread safe (Mike McCandless)

 RAMInputStream.clone is not thread safe
 ---

 Key: LUCENE-6745
 URL: https://issues.apache.org/jira/browse/LUCENE-6745
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.4

 Attachments: LUCENE-6745.patch


 This took some time to track down ... it's the root cause of the RangeTree 
 failures that [~steve_rowe] found at 
 https://issues.apache.org/jira/browse/LUCENE-6697?focusedCommentId=14696999page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14696999
 The problem happens when one thread is using the original IndexInput 
 (RAMInputStream) from a RAMDirectory, but other threads are also cloning  
 that IndexInput at the same time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Scott Blum (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705029#comment-14705029
 ] 

Scott Blum commented on SOLR-6760:
--

Woohoo!

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Fix For: Trunk, 5.4

 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the Queue. If the 
 items in the queue is much larger (in tens of thousands) , this is 
 counterproductive
 As the overseer queue is a multiple producers + single consumer queue, We can 
 read them all in bulk  and before processing each item , just do a 
 zk.exists(itemname) and if all is well we don't need to do the fetch all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7811) mapreduce contrib has an issue with morphlines lib relying on solr code from a standard release leading to runtime class mismatch errors.

2015-08-20 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7811?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-7811.
---
Resolution: Duplicate

 mapreduce contrib has an issue with morphlines lib relying on solr code from 
 a standard release leading to runtime class mismatch errors.
 -

 Key: SOLR-7811
 URL: https://issues.apache.org/jira/browse/SOLR-7811
 Project: Solr
  Issue Type: Bug
Reporter: Mark Miller
Assignee: Mark Miller





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6748) The query cache should not cache trivial queries

2015-08-20 Thread Terry Smith (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14705045#comment-14705045
 ] 

Terry Smith commented on LUCENE-6748:
-

I'd add a case to the patch to include empty DisjunctionMaxQuery instances also.


 The query cache should not cache trivial queries
 

 Key: LUCENE-6748
 URL: https://issues.apache.org/jira/browse/LUCENE-6748
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Attachments: LUCENE-6748.patch


 The query cache already avoids caching term queries because they are cheap, 
 but it doesn't do it with even cheaper queries like MatchAllDocsQuery.
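 A sketch of the kind of check such a policy could apply (illustrative only, 
 including the empty-DisjunctionMaxQuery case suggested above; not the actual 
 caching policy code):
 {code}
 import org.apache.lucene.search.BooleanQuery;
 import org.apache.lucene.search.DisjunctionMaxQuery;
 import org.apache.lucene.search.MatchAllDocsQuery;
 import org.apache.lucene.search.Query;
 import org.apache.lucene.search.TermQuery;

 class TrivialQueryCheck {
   static boolean worthCaching(Query query) {
     if (query instanceof MatchAllDocsQuery || query instanceof TermQuery) {
       return false; // cheaper to re-execute than to build a cached doc id set
     }
     if (query instanceof BooleanQuery
         && ((BooleanQuery) query).clauses().isEmpty()) {
       return false; // an empty boolean query matches nothing
     }
     if (query instanceof DisjunctionMaxQuery
         && ((DisjunctionMaxQuery) query).getDisjuncts().isEmpty()) {
       return false; // likewise for an empty DisMax
     }
     return true;
   }
 }
 {code}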



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-4212) Tests should not use new Random() without args

2015-08-20 Thread Lev Priima (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704775#comment-14704775
 ] 

Lev Priima commented on LUCENE-4212:


agree

 Tests should not use new Random() without args
 --

 Key: LUCENE-4212
 URL: https://issues.apache.org/jira/browse/LUCENE-4212
 Project: Lucene - Core
  Issue Type: Improvement
  Components: general/build
Reporter: Robert Muir
 Fix For: 4.0-ALPHA, Trunk

 Attachments: LUCENE-4212.patch, LUCENE-4212.patch, LUCENE-4212.patch, 
 LUCENE-4212.patch


 They should be using random() etc, and if they create one, it should pass in 
 a seed.
 Otherwise, they probably won't reproduce.
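 A small illustration of the difference (the random() helper in LuceneTestCase 
 derives from the test seed; the seed handling below is a simplification):
 {code}
 import java.util.Random;

 public class SeededRandomDemo {
   public static void main(String[] args) {
     long seed = Long.parseLong(System.getProperty("tests.seed", "42"), 16);

     Random bad = new Random();      // fresh seed every run: failures won't reproduce
     Random good = new Random(seed); // derived from the test seed: re-runnable

     System.out.println("non-reproducible: " + bad.nextInt(100));
     System.out.println("reproducible:     " + good.nextInt(100));
   }
 }
 {code}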



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6748) The query cache should not cache trivial queries

2015-08-20 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6748:


 Summary: The query cache should not cache trivial queries
 Key: LUCENE-6748
 URL: https://issues.apache.org/jira/browse/LUCENE-6748
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


The query cache already avoids caching term queries because they are cheap, but 
it doesn't do it with even cheaper queries like MatchAllDocsQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7949) There is an XSS issue in the plugins/stats page of the Admin Web UI.

2015-08-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704819#comment-14704819
 ] 

ASF subversion and git services commented on SOLR-7949:
---

Commit 1696782 from jan...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1696782 ]

SOLR-7949: Resolve XSS issue in Admin UI stats page

 There is an XSS issue in the plugins/stats page of the Admin Web UI.
 ---

 Key: SOLR-7949
 URL: https://issues.apache.org/jira/browse/SOLR-7949
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.9, 4.10.4, 5.2.1
Reporter: davidchiu
Assignee: Jan Høydahl
 Fix For: Trunk, 5.4, 5.3.1


 Open the Solr Admin Web UI, select a core (such as collection1), then click 
 Plugins/stats and type a URL like 
 http://127.0.0.1:8983/solr/#/collection1/plugins/cache?entry=score=<img 
 src=1 onerror=alert(1);> into the browser address bar; you will get an alert 
 box with 1.
 I changed the following code to resolve this problem:
 The original code:
   for( var i = 0; i < entry_count; i++ )
   {
 $( 'a[data-bean="' + entries[i] + '"]', frame_element )
   .parent().addClass( 'expanded' );
   }
 The changed code:
   for( var i = 0; i < entry_count; i++ )
   {
 $( 'a[data-bean="' + entries[i].esc() + '"]', frame_element )
   .parent().addClass( 'expanded' );
   }



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6760) New optimized DistributedQueue implementation for overseer

2015-08-20 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6760:

Attachment: (was: SOLR-6760-branch_5x.patch)

 New optimized DistributedQueue implementation for overseer
 --

 Key: SOLR-6760
 URL: https://issues.apache.org/jira/browse/SOLR-6760
 Project: Solr
  Issue Type: Bug
Reporter: Noble Paul
Assignee: Shalin Shekhar Mangar
 Attachments: SOLR-6760-branch_5x.patch, SOLR-6760.patch, 
 SOLR-6760.patch, SOLR-6760.patch, SOLR-6760.patch, deadlock.patch


 Currently the DQ works as follows
 * read all items in the directory
 * sort them all 
 * take the head and return it and discard everything else
 * rinse and repeat
 This works well when we have only a handful of items in the Queue. If the 
 items in the queue is much larger (in tens of thousands) , this is 
 counterproductive
 As the overseer queue is a multiple producers + single consumer queue, We can 
 read them all in bulk  and before processing each item , just do a 
 zk.exists(itemname) and if all is well we don't need to do the fetch all + 
 sort thing again



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7948) MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1

2015-08-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14704864#comment-14704864
 ] 

Mark Miller commented on SOLR-7948:
---

Thanks for the report.

I actually ran into this issue a couple weeks ago while trying to get the map 
reduce contrib back up to speed.

You can see this issue when you try and run the example: 
https://github.com/markrmiller/solr-map-reduce-example

I think the issue is that a Kite Morphlines jar is using a Solr class that has 
changed. If so, the answer is that Kite Morphlines should not use Solr classes 
outside of its couple of Solr modules. I'll look into getting that changed very 
soon, and then we will have to update versions.

 MapReduceIndexerTool of solr 5.2.1 doesn't work with hadoop 2.7.1
 -

 Key: SOLR-7948
 URL: https://issues.apache.org/jira/browse/SOLR-7948
 Project: Solr
  Issue Type: Bug
  Components: contrib - MapReduce
Affects Versions: 5.2.1
 Environment: OS:suse 11
 JDK:java version 1.7.0_65 
 Java(TM) SE Runtime Environment (build 1.7.0_65-b17)
 Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)
 HADOOP:hadoop 2.7.1 
 SOLR:5.2.1
Reporter: davidchiu

 When I used the MapReduceIndexerTool of 5.2.1 to index files, I got the 
 following errors; but when I used 4.9.0's MapReduceIndexerTool, it did work 
 with hadoop 2.7.1.
 Exception ERROR as follows:
 INFO  - 2015-08-20 11:44:45.155; [   ] org.apache.solr.hadoop.HeartBeater; Heart beat reporting class is org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
 INFO  - 2015-08-20 11:44:45.161; [   ] org.apache.solr.hadoop.SolrRecordWriter; Using this unpacked directory as solr home: /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.162; [   ] org.apache.solr.hadoop.SolrRecordWriter; Creating embedded Solr server with solrHomeDir: /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip, fs: DFS[DFSClient[clientName=DFSClient_attempt_1440040092614_0004_r_01_0_1678264055_1, ugi=root (auth:SIMPLE)]], outputShardDir: hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.194; [   ] org.apache.solr.core.SolrResourceLoader; new SolrResourceLoader for directory: '/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/'
 INFO  - 2015-08-20 11:44:45.206; [   ] org.apache.solr.hadoop.HeartBeater; HeartBeat thread running
 INFO  - 2015-08-20 11:44:45.207; [   ] org.apache.solr.hadoop.HeartBeater; Issuing heart beat for 1 threads
 INFO  - 2015-08-20 11:44:45.418; [   ] org.apache.solr.hadoop.SolrRecordWriter; Constructed instance information solr.home /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip (/usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip), instance dir /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/, conf dir /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/conf/, writing index to solr.data.dir hdfs://127.0.0.1:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1/data, with permdir hdfs://127.0.0.10:9000/tmp/outdir/reducers/_temporary/1/_temporary/attempt_1440040092614_0004_r_01_0/part-r-1
 INFO  - 2015-08-20 11:44:45.426; [   ] org.apache.solr.core.SolrXmlConfig; Loading container configuration from /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip/solr.xml
 INFO  - 2015-08-20 11:44:45.474; [   ] org.apache.solr.core.CorePropertiesLocator; Config-defined core root directory: /usr/local/hadoop/tmp/nm-local-dir/usercache/root/appcache/application_1440040092614_0004/container_1440040092614_0004_01_04/82f2eca9-d6eb-483b-960f-0d3b3b93788c.solr.zip
 INFO  - 2015-08-20 11:44:45.503; [   ] org.apache.solr.core.CoreContainer; New
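 For reference, a minimal sketch of how such an indexing job is typically
 driven; the morphline file, HDFS paths, ZooKeeper address, and collection
 name below are hypothetical, and it assumes the solr-map-reduce and Hadoop
 client artifacts are on the classpath:

     import org.apache.hadoop.conf.Configuration;
     import org.apache.hadoop.util.ToolRunner;
     import org.apache.solr.hadoop.MapReduceIndexerTool;

     public class IndexerDriverSketch {
       public static void main(String[] args) throws Exception {
         // MapReduceIndexerTool implements Hadoop's Tool interface, so it
         // can be driven through ToolRunner like the command-line form.
         int rc = ToolRunner.run(new Configuration(), new MapReduceIndexerTool(), new String[] {
             "--morphline-file", "morphline.conf",          // hypothetical morphline config
             "--output-dir", "hdfs://127.0.0.1:9000/tmp/outdir",
             "--zk-host", "127.0.0.1:2181/solr",            // assumed ZK address/chroot
             "--collection", "collection1",                 // hypothetical target collection
             "hdfs://127.0.0.1:9000/tmp/indir"              // input files to index
         });
         System.exit(rc);
       }
     }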

[jira] [Updated] (SOLR-7775) support SolrCloud collection as fromIndex param in query-time join

2015-08-20 Thread Andrei Beliakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Beliakov updated SOLR-7775:
--
Attachment: (was: SOLR-7775.patch)

 support SolrCloud collection as fromIndex param in query-time join
 --

 Key: SOLR-7775
 URL: https://issues.apache.org/jira/browse/SOLR-7775
 Project: Solr
  Issue Type: Sub-task
  Components: query parsers
Reporter: Mikhail Khludnev
Assignee: Mikhail Khludnev
 Fix For: 5.3

 Attachments: SOLR-7775.patch


 It alludes to SOLR-4905 and will be addressed right after SOLR-6234.
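 For illustration, a minimal sketch of the query-time join being extended,
 via SolrJ; the collection names ("skus", "products"), the field names, and
 the ZooKeeper address are hypothetical. Today fromIndex must name a local
 core; this sub-task is about letting it name a SolrCloud collection:

     import org.apache.solr.client.solrj.SolrQuery;
     import org.apache.solr.client.solrj.impl.CloudSolrClient;
     import org.apache.solr.client.solrj.response.QueryResponse;

     public class CrossCollectionJoinSketch {
       public static void main(String[] args) throws Exception {
         try (CloudSolrClient client = new CloudSolrClient("127.0.0.1:2181")) {
           client.setDefaultCollection("skus");
           // Run the subquery against the "products" index, collect its
           // product_id values, and return "skus" docs whose id matches.
           SolrQuery q = new SolrQuery(
               "{!join from=product_id to=id fromIndex=products}category:electronics");
           QueryResponse rsp = client.query(q);
           System.out.println("hits: " + rsp.getResults().getNumFound());
         }
       }
     }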



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: VOTE: RC0 Release of apache-solr-ref-guide-5.3.pdf

2015-08-20 Thread Cassandra Targett
Steve, Mikhail, thanks for your reviews. I've addressed your feedback
separately below:

Steve,

Sorry if I wasn't clear about the images. There were two problems with
images in the PDF:

* Many images were not appearing at all.
* Some images that did appear overlapped the surrounding text instead of
the text flowing around the image.

I solved the first problem by re-attaching the images on each affected page,
specifically by deleting each image and re-inserting it into the page.

For the second problem, I added a CSS rule that splits images across pages
when necessary. In our conversation on IRC, I shared two screenshots - one
with the image overlapping the text, and another with the image split - and
you indicated the splitting was preferable to the text overlapping. Splitting
the image was the only solution I was able to find to this problem (a sketch
of the rule is below).
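A minimal sketch of the kind of rule I mean, assuming it lives in the
stylesheet Confluence applies to PDF export; the selector and exact property
support depend on the PDF renderer, so treat this as illustrative only:

    /* Let oversized images break across page boundaries instead of
       overlapping the surrounding text. */
    img {
        page-break-inside: auto; /* permit splitting within the image box */
        max-width: 100%;         /* keep images within the page width */
    }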

Considering this, I think the only remaining image problem is the one image
that is too large to fit on the page.

Mikhail,

The issue you noticed with the links in the PDF is known and there is,
sadly, no workaround (see the 2nd item on the list of PDF Format Fixes on
the TODO list:
https://cwiki.apache.org/confluence/display/solr/Internal+-+TODO+List). The
only solution is to not have inter-page links, but that's not really a
solution at all.

Thanks again -
Cassandra


On Thu, Aug 20, 2015 at 1:18 AM, Mikhail Khludnev mkhlud...@griddynamics.com wrote:

 I dropped both overcomplicated things. Hope it helps.

 On Thu, Aug 20, 2015 at 8:35 AM, Mikhail Khludnev 
 mkhlud...@griddynamics.com wrote:

 Cassandra,
 page 266 (Join Query Parser / Scoring) has a broken JIRA macro; I'm going
 to replace it with a URL.
 page 198 has links, but they are not local (they don't refer to a page in
 the PDF) - they are URLs, e.g. Nested Child Documents for searching with
 Block Join Query Parsers. Here I'm not sure how to fix that.

 On Wed, Aug 19, 2015 at 7:23 PM, Cassandra Targett casstarg...@gmail.com wrote:

 Please VOTE to release the following as apache-solr-ref-guide-5.3.pdf.


 https://dist.apache.org/repos/dist/dev/lucene/solr/ref-guide/apache-solr-ref-guide-5.3-RC0/

 $ cat apache-solr-ref-guide-5.3-RC0/apache-solr-ref-guide-5.3.pdf.sha1

 076fa1cb986a8bc8ac873e65e6ef77a841336221  apache-solr-ref-guide-5.3.pdf


 Thanks,

 Cassandra




 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 mkhlud...@griddynamics.com




 --
 Sincerely yours
 Mikhail Khludnev
 Principal Engineer,
 Grid Dynamics

 http://www.griddynamics.com
 mkhlud...@griddynamics.com


