[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 2953 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/2953/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseParallelGC

133 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrExampleXMLTest.testExpandComponent

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:62857/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:62857/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([71E87EEB3024D523:830EF858DF835929]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:896)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:859)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:874)
at 
org.apache.solr.client.solrj.SolrExampleTests.testExpandComponent(SolrExampleTests.java:1889)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailure

[JENKINS] Lucene-Solr-Tests-trunk-Java8 - Build # 711 - Failure

2015-12-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java8/711/

42 tests failed.
FAILED:  org.apache.solr.client.solrj.SolrExampleBinaryTest.testAugmentFields

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:46761/solr/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:46761/solr/collection1
at 
__randomizedtesting.SeedInfo.seed([33BAD0D9A030CE6D:37F1733BF83BCF22]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:896)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:859)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:874)
at 
org.apache.solr.client.solrj.SolrExampleTests.testAugmentFields(SolrExampleTests.java:477)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTes

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15248 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15248/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Error from server at http://127.0.0.1:41988/_fpo/awholynewcollection_0: 
Expected mime type application/octet-stream but got text/html.   
 
Error 500HTTP ERROR: 500 Problem 
accessing /_fpo/awholynewcollection_0/select. Reason: {msg=Error 
trying to proxy request for url: 
http://127.0.0.1:44216/_fpo/awholynewcollection_0/select,trace=org.apache.solr.common.SolrException:
 Error trying to proxy request for url: 
http://127.0.0.1:44216/_fpo/awholynewcollection_0/select  at 
org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:591)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:441)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:111)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
  at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:45)  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1158)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1090)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:437)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:119) 
 at org.eclipse.jetty.server.Server.handle(Server.java:517)  at 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)  at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:242)  at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:261)
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)  at 
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:75) 
 at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
  at 
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:147)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
  at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572) 
 at java.lang.Thread.run(Thread.java:745) Caused by: 
org.apache.http.conn.ConnectionPoolTimeoutException: Timeout waiting for 
connection from pool  at 
org.apache.http.impl.conn.PoolingClientConnectionManager.leaseConnection(PoolingClientConnectionManager.java:226)
  at 
org.apache.http.impl.conn.PoolingClientConnectionManager$1.getConnection(PoolingClientConnectionManager.java:195)
  at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:423)
  at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
  at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
  at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:107)
  at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
  at org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:558)  
... 28 more ,code=500} Powered by Jetty:// 9.3.6.v20151106 (http://eclipse.org/jetty) 
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:41988/_fpo/awholynewcollection_0: Expected mime 
type application/octet-stream but got text/html. 


Error 500 


HTTP ERROR: 500
Problem accessing /_fpo/awholynewcollection_0/select. Reason:
{msg=Error trying to proxy request for url: 
http://127.0.0.1:44216/_fpo/awholynewcollection_0/select,trace=org.apache.solr.common.SolrException:
 Error trying to proxy request for url: 
http://127.0.0.1:44216/_fpo/awholynewcollection_0/select
at 
org.apache.solr.servlet.HttpSolrCall.remoteQuery(HttpSolrCall.java:591)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:441)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
at 
org.apache.solr.servlet.SolrDis
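The root cause buried in the proxy trace above is a ConnectionPoolTimeoutException: every pooled HTTP connection was already leased, so the next lease request waited out its timeout. A minimal Python model of that pool-lease behavior (pool size and timeout values are illustrative, not Solr's or HttpClient's actual defaults):

```python
import queue

# Model a bounded HTTP connection pool: a lease blocks until a
# connection is returned, and times out if none becomes free.
class ConnectionPool:
    def __init__(self, max_total):
        self._free = queue.Queue()
        for i in range(max_total):
            self._free.put(f"conn-{i}")

    def lease(self, timeout):
        try:
            return self._free.get(timeout=timeout)
        except queue.Empty:
            raise TimeoutError("Timeout waiting for connection from pool")

    def release(self, conn):
        self._free.put(conn)

pool = ConnectionPool(max_total=2)
a = pool.lease(timeout=0.1)
b = pool.lease(timeout=0.1)    # pool is now exhausted
try:
    pool.lease(timeout=0.1)    # nothing released in time -> timeout
    timed_out = False
except TimeoutError:
    timed_out = True
pool.release(a)
c = pool.lease(timeout=0.1)    # succeeds once a connection is released
```

In the failure above the leases are never returned fast enough under test load, so the proxying node's request director fails exactly at the lease step shown in the `PoolingClientConnectionManager` frames.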

[jira] [Comment Edited] (SOLR-8096) Major faceting performance regressions

2015-12-18 Thread Jamie Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15065185#comment-15065185
 ] 

Jamie Johnson edited comment on SOLR-8096 at 12/19/15 5:12 AM:
---

While some (all?) of the performance issues are addressed, would it not still 
be useful to add an option to support either faceting approach?  I understand 
the benefits of DocValues, but we have a case where the facets need to be 
calculated based on the access level a user has.  Simply storing them in a 
separate field is not an option because the access controls are complex.  
Given that the JSON Facet API allows developers to choose the faceting method, 
it would seem reasonable to provide similar functionality here, no?  Perhaps 
support the original implementation when method is fc, and add a dv method 
for docvalues.  This would be in line with the new JSON API, I believe, though 
from the looks of things it is not a trivial patch, since SimpleFacets seems 
pretty out of sync with the new faceting approach with regard to using 
UnInvertedField.
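For context, the per-facet method selection the comment refers to is expressed in the JSON Facet API as a parameter on each facet. A hedged Python sketch building such a request body (the facet label and field name `cat` are made up for illustration, and the set of supported method values varies by Solr version):

```python
import json

# Build a JSON Facet API request body that asks for a terms facet
# and hints at the faceting implementation via the "method" parameter.
facet_request = {
    "query": "*:*",
    "facet": {
        "categories": {          # hypothetical facet label
            "type": "terms",
            "field": "cat",      # hypothetical field name
            "limit": 10,
            "method": "dv",      # request the docvalues-based implementation
        }
    },
}

body = json.dumps(facet_request)     # what would be POSTed to /select
decoded = json.loads(body)
```

The parallel proposal in the comment is to let the classic `facet.method` parameter select between the fc (UnInvertedField) and dv implementations in the same spirit.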


was (Author: jej2003):
While some (all?) of the performance issues are addressed, would it not still 
be useful to add an option to support either faceting approach?  I understand 
the benefits of DocValues but we have a case where the facets need to be 
calculated based on an access level the user has.  Simply storing in a separate 
field is not an option because the access controls are complex.  Given that the 
JSON Facet API allows developers to choose the faceting method it would seem 
reasonable to provide similar functionality here, no?  It would seem a fairly 
trivial patch to support the original implementation as the approach when 
method is fc and add a dv method to support docvalues.  This would be inline 
with the new JSON API I believe.

> Major faceting performance regressions
> --
>
> Key: SOLR-8096
> URL: https://issues.apache.org/jira/browse/SOLR-8096
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.0, 5.1, 5.2, 5.3, Trunk
>Reporter: Yonik Seeley
>Priority: Critical
>
> Use of the highly optimized faceting that Solr had for multi-valued fields 
> over relatively static indexes was removed as part of LUCENE-5666, causing 
> severe performance regressions.
> Here are some quick benchmarks to gauge the damage, on a 5M document index, 
> with each field having between 0 and 5 values per document.  *Higher numbers 
> represent worse 5x performance*.
> Solr 5.4_dev faceting time as a percent of Solr 4.10.3 faceting time 
> (columns give the percent of the index being faceted):
> ||num_unique_values|| 10% || 50% || 90% ||
> |10      | 351.17% | 1587.08% | 3057.28% |
> |100     | 158.10% | 203.61%  | 1421.93% |
> |1000    | 143.78% | 168.01%  | 1325.87% |
> |10000   | 137.98% | 175.31%  | 1233.97% |
> |100000  | 142.98% | 159.42%  | 1252.45% |
> |1000000 | 255.15% | 165.17%  | 1236.75% |
> For example, a field with 1000 unique values in the whole index, faceting 
> with 5x took 143% of the 4x time, when ~10% of the docs in the index were 
> faceted.
> One user who brought the performance problem to our attention: 
> http://markmail.org/message/ekmqh4ocbkwxv3we
> "faceting is unusable slow since upgrade to 5.3.0" (from 4.10.3)
> The disabling of the UnInvertedField algorithm was previously discovered in 
> SOLR-7190, but we didn't know just how bad the problem was at that time.
> edit: removed "secret" adverb by request
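Each table entry is simply the 5.x faceting time expressed as a percentage of the 4.10.3 time, so values above 100% mean a slowdown. A quick sketch of the arithmetic, with made-up raw timings chosen to reproduce one cell:

```python
def regression_percent(time_5x_ms, time_4x_ms):
    """5.x faceting time as a percent of the 4.x time (>100% = slower)."""
    return 100.0 * time_5x_ms / time_4x_ms

# Hypothetical raw timings: 4.10.3 takes 50 ms, 5.x takes 71.89 ms
# for the same facet request -- matching the 1000-value / 10% cell.
pct = regression_percent(71.89, 50.0)
```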



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8096) Major faceting performance regressions

2015-12-18 Thread Jamie Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15065185#comment-15065185
 ] 

Jamie Johnson edited comment on SOLR-8096 at 12/19/15 3:31 AM:
---

While some (all?) of the performance issues are addressed, would it not still 
be useful to add an option to support either faceting approach?  I understand 
the benefits of DocValues but we have a case where the facets need to be 
calculated based on an access level the user has.  Simply storing in a separate 
field is not an option because the access controls are complex.  Given that the 
JSON Facet API allows developers to choose the faceting method it would seem 
reasonable to provide similar functionality here, no?  It would seem a fairly 
trivial patch to support the original implementation as the approach when 
method is fc and add a dv method to support docvalues.  This would be inline 
with the new JSON API I believe.


was (Author: jej2003):
While some (all?) of the performance issues are addressed, would it not still 
be useful to add an option to support either faceting approach?  I understand 
the benefits of DocValues but we have a case where the facets need to be 
calculated based on an access level the user has.  Simply storing in a separate 
field is not an option because the access controls are complex.  Given that the 
JSON Facet API allows developers to choose the faceting method it would seem 
reasonable to provide similar functionality here, no?







[jira] [Commented] (SOLR-8422) Basic Authentication plugin is not working correctly in solrcloud

2015-12-18 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15065201#comment-15065201
 ] 

Noble Paul commented on SOLR-8422:
--

Just to confirm: all nodes are updated with the patch, right?

> Basic Authentication plugin is not working correctly in solrcloud
> -
>
> Key: SOLR-8422
> URL: https://issues.apache.org/jira/browse/SOLR-8422
> Project: Solr
>  Issue Type: Bug
>  Components: Authentication
>Affects Versions: 5.3.1
> Environment: Solrcloud
>Reporter: Nirmala Venkatraman
>Assignee: Noble Paul
> Attachments: SOLR-8422.patch
>
>
> I am seeing a problem with basic auth on Solr 5.3.1. We have a 5-node 
> solrcloud with basic auth configured on sgdsolar1/2/3/4/7, listening on port 
> 8984. We have 64 collections, each having 2 replicas distributed across the 
> 5 servers in the solrcloud. A sample screenshot of the collection/shard 
> locations is shown below:-
> Step 1 - Our solr indexing tool sends a request to any one of the solr 
> servers in the solrcloud, and the request lands on a server which doesn't 
> have the collection.
> Here is the request sent by the indexing tool to sgdsolar1, which includes 
> the correct BasicAuth credentials.
> Step 2 - Now sgdsolar1 routes the request to sgdsolar2, which has 
> collection1, but no basic auth header is passed along. 
> As a result, sgdsolar2 throws a 401 error back to the source server 
> sgdsolar1, and all the way back to the solr indexing tool:
> 9.32.182.53 - - [15/Dec/2015:00:45:18 +] "GET 
> /solr/collection1/get?_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&fl=unid,sequence,folderunid&wt=xml&rows=10
>  HTTP/1.1" 401 366
> 2015-12-15 00:45:18.112 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] 
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. 
> failed permission 
> org.apache.solr.security.RuleBasedAuthorizationPlugin$Permission@5ebe8fca
> 2015-12-15 00:45:18.113 INFO  (qtp1214753695-56) [c:collection1 s:shard1 
> r:core_node1 x:collection1_shard1_replica1] o.a.s.s.HttpSolrCall 
> USER_REQUIRED auth header null context : userPrincipal: [null] type: [READ], 
> collections: [collection1,], Path: [/get] path : /get params 
> :fl=unid,sequence,folderunid&ids=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!08D9EACCA5AE663400257EB6005A5CFF,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!9057B828F841C41F00257EB6005A7421,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!F3FB9305A00A0E1200257EB6005AAA99,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!E9815A6F3CBC3D0E00257EB6005ACA02,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!FEB43AC9F648AFC500257EB6005AE4EB,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!4CF37E73A18F9D9F00257E590016CBD9,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!61D5457FEA1EBE5C00257E5900188729,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!6B0D89B9A7EEBC4600257E590019CEDA,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!360B9B52D9C6DFE400257EB2007FCD8B,Q049c2dkbWFpbDMwL089U0dfVVMx20093510!D86D4CED01F66AF300257EB2008305A4&rows=10&wt=xml&_route_=Q049c2dkbWFpbDMwL089U0dfVVMx20093510!
> Step 3 - In another solrcloud, if the indexing tool sends the solr get 
> request directly to the server that has collection1, basic authentication 
> works as expected.
> I double-checked, and both the sgdsolar1 and sgdsolar2 servers have the 
> patched solr-core and solr-solrj jar files under the solr-webapp folder that 
> were provided via the earlier patches Anshum/Noble worked on:-
> SOLR-8167 fixes the POST issue 
> SOLR-8326 fixing PKIAuthenticationPlugin.
> SOLR-8355
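The 401 in Step 2 follows directly from how HTTP Basic auth works: the credentials travel only in the `Authorization` request header, so a forwarding hop that builds a fresh request without copying that header arrives at the remote node with no principal. A minimal sketch (the credentials are illustrative, not from the report):

```python
import base64

def basic_auth_header(user, password):
    """Construct the standard HTTP Basic Authorization header."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# The indexing tool's original request to sgdsolar1 carries the header...
original_headers = basic_auth_header("solr", "SolrRocks")

# ...but a naive forwarding hop that rebuilds the request drops it,
# so the receiving node sees "auth header null" as in the log above.
forwarded_headers = {}   # Authorization not copied across the hop
unauthenticated = "Authorization" not in forwarded_headers
```

This is the gap the PKIAuthenticationPlugin work referenced below (SOLR-8326) is meant to cover for inter-node requests.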






[jira] [Commented] (SOLR-8096) Major faceting performance regressions

2015-12-18 Thread Jamie Johnson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15065185#comment-15065185
 ] 

Jamie Johnson commented on SOLR-8096:
-

While some (all?) of the performance issues are addressed, would it not still 
be useful to add an option to support either faceting approach?  I understand 
the benefits of DocValues but we have a case where the facets need to be 
calculated based on an access level the user has.  Simply storing in a separate 
field is not an option because the access controls are complex.  Given that the 
JSON Facet API allows developers to choose the faceting method it would seem 
reasonable to provide similar functionality here, no?







[JENKINS-EA] Lucene-Solr-5.x-Linux (64bit/jdk-9-ea+95) - Build # 14949 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/14949/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseG1GC -XX:-CompactStrings

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
5 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=8950, 
name=zkCallback-1667-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
jdk.internal.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:747)
   2) Thread[id=8949, name=TEST-CollectionsAPIDistributedZkTest.test-seed#[4AE87735049CCBE8]-EventThread, state=WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:178)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2061)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   3) Thread[id=9211, name=zkCallback-1667-thread-3, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=9210, name=zkCallback-1667-thread-2, state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
at java.lang.Thread.run(Thread.java:747)
   5) Thread[id=8948, name=TEST-CollectionsAPIDistributedZkTest.test-seed#[4AE87735049CCBE8]-SendThread(127.0.0.1:48330), state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
at java.lang.Thread.sleep(Native Method)
at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:230)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1185)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1110)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 
   1) Thread[id=8950, name=zkCallback-1667-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
at jdk.internal.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:461)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1082)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
at java.util.concu

[jira] [Updated] (SOLR-8446) Allow failonerror to be configured for unit tests

2015-12-18 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-8446:
-
Attachment: SOLR-8446.patch

Here's a trivial patch that makes failonerror configurable for the top-level 
"test" task.

This works for the use case above, but depending on what you want to happen, you 
need to understand the ant structure a bit.  For example:
1) If the test task fails, no test report will be generated, so you may have 
to specify -Dtests.ifNoTests=ignore as well or the task will still fail.  If 
all the lucene tests pass and the solr tests fail (or vice versa), the task 
will succeed even without specifying -Dtests.ifNoTests=ignore, because the 
passing subdir will generate the report.
2) This setting affects the entire task, so calling "ant test" can pass even if, 
say, compilation is broken.  You may want to specify something like "ant 
compile compile-test test" to avoid this.
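Read literally, the idea is just to replace the hard-coded attribute with a property that jenkins can override on the command line. A rough sketch of what the build change could look like; the property name, target wiring, and subant fileset here are guesses for illustration, the actual change is in the attached SOLR-8446.patch:

```xml
<!-- Sketch only: expose failonerror as an overridable property instead of
     hard-coding it in the top-level "test" target. -->
<property name="tests.failonerror" value="false"/>

<target name="test">
  <!-- With failonerror=false, a failing subdir no longer aborts the task,
       so Jenkins can still collect reports and mark the build yellow. -->
  <subant target="test" failonerror="${tests.failonerror}">
    <fileset dir="." includes="lucene/build.xml,solr/build.xml"/>
  </subant>
</target>
```

Jenkins would then invoke something like `ant compile compile-test test -Dtests.failonerror=false` to get the yellow-build behavior described below.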

> Allow failonerror to be configured for unit tests
> -
>
> Key: SOLR-8446
> URL: https://issues.apache.org/jira/browse/SOLR-8446
> Project: Solr
>  Issue Type: Improvement
>  Components: Tests
>Reporter: Gregory Chanan
>Assignee: Gregory Chanan
> Attachments: SOLR-8446.patch
>
>
> Currently, failonerror is hard coded to false for the "test" task at the top 
> level scope.  For jenkins runs, it would be useful to be able to configure 
> this because:
> 1) unit test runs are flaky
> 2) jenkins can detect test failures even if the test task itself passes 
> and mark the build yellow (which happens if failonerror is true)
> Therefore, this allows some nicer visualization of the jenkins history, i.e.:
> green if everything is good
> yellow if unit tests are failing (most likely flaky)
> red if compile / precommit, etc. are broken



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-8446) Allow failonerror to be configured for unit tests

2015-12-18 Thread Gregory Chanan (JIRA)
Gregory Chanan created SOLR-8446:


 Summary: Allow failonerror to be configured for unit tests
 Key: SOLR-8446
 URL: https://issues.apache.org/jira/browse/SOLR-8446
 Project: Solr
  Issue Type: Improvement
  Components: Tests
Reporter: Gregory Chanan
Assignee: Gregory Chanan


Currently, failonerror is hard coded to false for the "test" task at the top 
level scope.  For jenkins runs, it would be useful to be able to configure this 
because:
1) unit test runs are flaky
2) jenkins can detect test failures even if the test task itself passes and 
mark the build yellow (which happens if failonerror is true)

Therefore, this allows some nicer visualization of the jenkins history, i.e.:
green if everything is good
yellow if unit tests are failing (most likely flaky)
red if compile / precommit, etc. are broken



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 270 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/270/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.handler.clustering.DistributedClusteringComponentTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:46327//collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:46327//collection1
at 
__randomizedtesting.SeedInfo.seed([97A5764727156BDD:1FF1499D89E90625]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:896)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:859)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:874)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:545)
at 
org.apache.solr.handler.clustering.DistributedClusteringComponentTest.test(DistributedClusteringComponentTest.java:36)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.jav

[jira] [Closed] (SOLR-8439) Solr Security - Permission read does not work as expected

2015-12-18 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl closed SOLR-8439.
-
Resolution: Duplicate

Fixed in SOLR-8167. Please upgrade to 5.4.
Marking as duplicate.

> Solr Security - Permission read does not work as expected
> -
>
> Key: SOLR-8439
> URL: https://issues.apache.org/jira/browse/SOLR-8439
> Project: Solr
>  Issue Type: Bug
>  Components: security
>Affects Versions: 5.3.1
> Environment: Linux, Solr Cloud
>Reporter: Gaurav Kumar
>Priority: Critical
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> I enabled security on my solr cloud and added basic authentication and 
> authorization to allow only specific users to read and update the records. 
> What I observed is that update works fine, but read is not blocked for 
> anonymous access. 
> On digging deeper I saw that RuleBasedAuthorizationPlugin.java has 
> incorrectly defined the read permission as follows:
> read :{" +
>   "  path:['/update/*', '/get']}," +
> It should be /select/* rather than /update/*.
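For anyone stuck on 5.3.x before upgrading, the predefined permission can be sidestepped by declaring an explicit path-based permission in security.json. This is an untested sketch of the idea; the permission name, roles, and users are placeholders, not values from the issue:

```json
{
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "explicit-read",
        "path": ["/select/*", "/get"],
        "role": "reader" },
      { "name": "update", "role": "writer" }
    ],
    "user-role": { "alice": "reader", "bob": "writer" }
  }
}
```

The explicit "path" entry uses the intended /select/* mapping directly instead of relying on the built-in "read" permission that the report says is wrong.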



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6941) TestStressIndexing2.testMultiConfig() failure: all instances of a given field name must have the same term vectors settings

2015-12-18 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-6941:
--

 Summary: TestStressIndexing2.testMultiConfig() failure: all 
instances of a given field name must have the same term vectors settings
 Key: LUCENE-6941
 URL: https://issues.apache.org/jira/browse/LUCENE-6941
 Project: Lucene - Core
  Issue Type: Bug
Affects Versions: 5.5
Reporter: Steve Rowe


ASF Jenkins found this failure: 
[https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1050/].  I was able 
to reproduce by beasting (1/10 dups failed on the first beast iteration):

{noformat}
[junit4] Suite: org.apache.lucene.index.TestStressIndexing2
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestStressIndexing2 
-Dtests.method=testMultiConfig -Dtests.seed=337C1E3DCE453481 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=hr_HR -Dtests.timezone=Asia/Shanghai -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   1.92s J1 | TestStressIndexing2.testMultiConfig <<<
   [junit4]> Throwable #1: java.lang.IllegalArgumentException: all 
instances of a given field name must have the same term vectors settings 
(storeTermVectorOffsets changed for field="f0")
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([337C1E3DCE453481:FEEE68E563132BCF]:0)
   [junit4]>at 
org.apache.lucene.index.TermVectorsConsumerPerField.start(TermVectorsConsumerPerField.java:170)
   [junit4]>at 
org.apache.lucene.index.TermsHashPerField.start(TermsHashPerField.java:292)
   [junit4]>at 
org.apache.lucene.index.FreqProxTermsWriterPerField.start(FreqProxTermsWriterPerField.java:74)
   [junit4]>at 
org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:611)
   [junit4]>at 
org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:344)
   [junit4]>at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:300)
   [junit4]>at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:234)
   [junit4]>at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1477)
   [junit4]>at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)
   [junit4]>at 
org.apache.lucene.index.TestStressIndexing2.indexSerial(TestStressIndexing2.java:250)
   [junit4]>at 
org.apache.lucene.index.TestStressIndexing2.testMultiConfig(TestStressIndexing2.java:106)
   [junit4]>at java.lang.Thread.run(Thread.java:745)
   [junit4]>Suppressed: java.lang.IllegalStateException: close() 
called in wrong state: RESET
   [junit4]>at 
org.apache.lucene.analysis.MockTokenizer.fail(MockTokenizer.java:126)
   [junit4]>at 
org.apache.lucene.analysis.MockTokenizer.close(MockTokenizer.java:293)
   [junit4]>at 
org.apache.lucene.analysis.TokenFilter.close(TokenFilter.java:58)
   [junit4]>at 
org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:687)
   [junit4]>... 44 more
   [junit4]   2> NOTE: leaving temporary files on disk at: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-5.x/lucene/build/core/test/J1/temp/lucene.index.TestStressIndexing2_337C1E3DCE453481-001
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene54): 
{f25=FSTOrd50, f32=FSTOrd50, 
id=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
f19=Lucene50(blocksize=128), f68=Lucene50(blocksize=128), f43=FSTOrd50, 
f15=Lucene50(blocksize=128), f14=FSTOrd50, f95=Lucene50(blocksize=128), 
f33=Lucene50(blocksize=128), f65=FSTOrd50, f93=Lucene50(blocksize=128), 
f77=Lucene50(blocksize=128), 
f12=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
f71=Lucene50(blocksize=128), f83=FSTOrd50, f91=Lucene50(blocksize=128), 
f2=Lucene50(blocksize=128), f40=Lucene50(blocksize=128), f10=FSTOrd50, 
f28=Lucene50(blocksize=128), 
f92=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
f81=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
f57=Lucene50(blocksize=128), f22=Lucene50(blocksize=128), f58=FSTOrd50, 
f23=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
f13=Lucene50(blocksize=128), f46=Lucene50(blocksize=128), 
f48=Lucene50(blocksize=128), f82=Lucene50(blocksize=128), f36=FSTOrd50, 
f52=PostingsFormat(name=LuceneVarGapDocFreqInterval), 
f31=Lucene50(blocksize=128), 
f27=PostingsFormat(name=LuceneVarGapDocFreqInterval), f90=FSTOrd50, 
f69=FSTOrd50, f80=Luce
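The failing check is in TermVectorsConsumerPerField.start (per the stack trace): within one indexing chain, the first document to use a field name fixes that field's term-vector options, and a later document that changes them (here, storeTermVectorOffsets on "f0") is rejected. The invariant can be illustrated without Lucene; everything below is standalone stdlib code, not Lucene's API:

```java
import java.util.HashMap;
import java.util.Map;

public class TermVectorInvariant {
    // Records the storeTermVectorOffsets option first seen for each field
    // name; a later mismatch triggers the same kind of
    // IllegalArgumentException the test hit.
    private final Map<String, Boolean> storeOffsetsByField = new HashMap<>();

    public void startField(String name, boolean storeTermVectorOffsets) {
        Boolean previous = storeOffsetsByField.putIfAbsent(name, storeTermVectorOffsets);
        if (previous != null && previous.booleanValue() != storeTermVectorOffsets) {
            throw new IllegalArgumentException(
                "all instances of a given field name must have the same term vectors settings"
                + " (storeTermVectorOffsets changed for field=\"" + name + "\")");
        }
    }

    public static void main(String[] args) {
        TermVectorInvariant chain = new TermVectorInvariant();
        chain.startField("f0", true);      // first use fixes the setting
        try {
            chain.startField("f0", false); // conflicting setting is rejected
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

In the randomized test, documents are built with randomized FieldType options, so two documents can legally disagree across segments but not within the same one, which is why the failure only shows up under particular seeds.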

Re: Null Commit Mails from Buildbot for website

2015-12-18 Thread Erick Erickson
Works for me.

On Fri, Dec 18, 2015 at 3:48 PM, Upayavira  wrote:
> We regularly get emails such as the below from buildbot. They make
> reviewing commits in the commit mailing list hard, because there's so
> much junk there.
>
> Doing some digging, it seems that our buildbot setup uses svnmucc to
> push changed files up to SVN, however, svnmucc seems to create a commit
> whether anything has changed or not.
>
> The buildbot job is three stages: svn up, build site, upload site.
>
> We could prevent these messages by making the second and third steps
> "dependent" upon the first. In which case, they won't occur if no files
> are changed.
>
> Any objections to doing this?
>
> Upayavira
>
> - Original message -
> From: build...@apache.org
> To: comm...@lucene.apache.org
> Subject: svn commit: r975896 - in /websites: production/lucene/content/
> production/lucene/content/core/ production/lucene/content/solr/
> staging/lucene/trunk/content/ staging/lucene/trunk/content/core/
> staging/lucene/trunk/content/solr/
> [rest of the buildbot commit mail trimmed; it is quoted in full in the
> original "Null Commit Mails from Buildbot for website" message in this
> archive]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2897 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2897/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  org.apache.solr.cloud.TestAuthenticationFramework.testBasics

Error Message:
Error reading cluster properties

Stack Trace:
org.apache.solr.common.SolrException: Error reading cluster properties
at 
__randomizedtesting.SeedInfo.seed([158BA1385DBE60BD:28530F1465503ECD]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getClusterProps(ZkStateReader.java:738)
at 
org.apache.solr.common.cloud.ZkStateReader.getBaseUrlForNodeName(ZkStateReader.java:832)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:999)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:871)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:807)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:150)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:167)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.getRequestState(AbstractFullDistribZkTestBase.java:1904)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.getRequestStateAfterCompletion(AbstractFullDistribZkTestBase.java:1885)
at 
org.apache.solr.cloud.TestMiniSolrCloudCluster.testCollectionCreateSearchDelete(TestMiniSolrCloudCluster.java:143)
at 
org.apache.solr.cloud.TestAuthenticationFramework.testBasics(TestAuthenticationFramework.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
or

Null Commit Mails from Buildbot for website

2015-12-18 Thread Upayavira
We regularly get emails such as the below from buildbot. They make
reviewing commits in the commit mailing list hard, because there's so
much junk there.

Doing some digging, it seems that our buildbot setup uses svnmucc to
push changed files up to SVN, however, svnmucc seems to create a commit
whether anything has changed or not.

The buildbot job is three stages: svn up, build site, upload site.

We could prevent these messages by making the second and third steps
"dependent" upon the first. In which case, they won't occur if no files
are changed.

Any objections to doing this?

Upayavira
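Outside of buildbot's step-dependency mechanism, the same "don't commit when nothing changed" idea can be expressed as a guard in the upload step itself. A hedged sketch; the directory names are placeholders and the svnmucc invocation is only alluded to, not reproduced:

```shell
# Placeholder directories standing in for the freshly built site and the
# currently published copy.
mkdir -p /tmp/site-demo/build /tmp/site-demo/deployed
echo same > /tmp/site-demo/build/index.html
echo same > /tmp/site-demo/deployed/index.html

# diff -rq exits 0 when the trees are identical, so the upload (and the
# empty svnmucc commit that triggers the null mails) can be skipped.
if diff -rq /tmp/site-demo/build /tmp/site-demo/deployed > /dev/null; then
  echo "no changes; skipping commit"
else
  echo "changes found; would run svnmucc upload here"
fi
```

Making the build and upload steps dependent on "svn up" reporting changes, as proposed, achieves the same effect one level earlier in the pipeline.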

- Original message -
From: build...@apache.org
To: comm...@lucene.apache.org
Subject: svn commit: r975896 - in /websites: production/lucene/content/
production/lucene/content/core/ production/lucene/content/solr/
staging/lucene/trunk/content/ staging/lucene/trunk/content/core/
staging/lucene/trunk/content/solr/
Date: Fri, 18 Dec 2015 20:21:05 -

Author: buildbot
Date: Fri Dec 18 20:21:05 2015
New Revision: 975896

Log:
Dynamic update by buildbot for lucene

Modified:
websites/production/lucene/content/core/index.html
websites/production/lucene/content/index.html
websites/production/lucene/content/solr/index.html
websites/staging/lucene/trunk/content/core/index.html
websites/staging/lucene/trunk/content/index.html
websites/staging/lucene/trunk/content/solr/index.html

Modified: websites/production/lucene/content/core/index.html
==
(empty)

Modified: websites/production/lucene/content/index.html
==
(empty)

Modified: websites/production/lucene/content/solr/index.html
==
(empty)

Modified: websites/staging/lucene/trunk/content/core/index.html
==
(empty)

Modified: websites/staging/lucene/trunk/content/index.html
==
(empty)

Modified: websites/staging/lucene/trunk/content/solr/index.html
==
(empty)



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 1050 - Still Failing

2015-12-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/1050/

1 tests failed.
FAILED:  org.apache.lucene.index.TestStressIndexing2.testMultiConfig

Error Message:
all instances of a given field name must have the same term vectors settings 
(storeTermVectorOffsets changed for field="f0")

Stack Trace:
java.lang.IllegalArgumentException: all instances of a given field name must 
have the same term vectors settings (storeTermVectorOffsets changed for 
field="f0")
at 
__randomizedtesting.SeedInfo.seed([337C1E3DCE453481:FEEE68E563132BCF]:0)
at 
org.apache.lucene.index.TermVectorsConsumerPerField.start(TermVectorsConsumerPerField.java:170)
at 
org.apache.lucene.index.TermsHashPerField.start(TermsHashPerField.java:292)
at 
org.apache.lucene.index.FreqProxTermsWriterPerField.start(FreqProxTermsWriterPerField.java:74)
at 
org.apache.lucene.index.DefaultIndexingChain$PerField.invert(DefaultIndexingChain.java:611)
at 
org.apache.lucene.index.DefaultIndexingChain.processField(DefaultIndexingChain.java:344)
at 
org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:300)
at 
org.apache.lucene.index.DocumentsWriterPerThread.updateDocument(DocumentsWriterPerThread.java:234)
at 
org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:450)
at 
org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1477)
at 
org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1256)
at 
org.apache.lucene.index.TestStressIndexing2.indexSerial(TestStressIndexing2.java:250)
at 
org.apache.lucene.index.TestStressIndexing2.testMultiConfig(TestStressIndexing2.java:106)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene

[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064864#comment-15064864
 ] 

Arcadius Ahouansou commented on SOLR-7865:
--

Thank you very much [~mikemccand] for your valuable help!

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java:
> This is mainly because {code}num * numFactor{code} gets called multiple times 
> from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test is expecting count=1 but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2015-12-18 Thread Arcadius Ahouansou (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arcadius Ahouansou updated SOLR-8146:
-
Comment: was deleted

(was: Thank you very much [~mikemccand] for your help!)

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query and then picking the 
> first item from the list.
> This ticket is to allow more flexibility and control over which 
> URLs will be picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses a regex pattern and moves to the top of the list of 
> URLs only those matching the regex given by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU, used only for 
> manual user queries, data export, and other production issue investigation.
> This ticket would allow configuring the applications using SolrJ to query 
> only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand].
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying Solr using SolrJ.
> All Solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack read will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross-DC deployment, so replace 
> rack1/rack2 with DC1/DC2.
> Any comment would be much appreciated.
> Thanks.
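The reordering described in the patch summary can be sketched as follows (illustrative names only; this is not the actual SOLR-8146 patch):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.regex.Pattern;

// Sketch of the preferred-node idea: shuffle the live URLs as CloudSolrClient
// already does, then stable-partition them so URLs matching the
// solr.preferredQueryNodePattern regex come first and are tried first.
public class PreferredNodeSketch {
    static List<String> orderUrls(List<String> liveUrls, String preferredRegex) {
        List<String> shuffled = new ArrayList<>(liveUrls);
        Collections.shuffle(shuffled);
        Pattern preferred = Pattern.compile(preferredRegex);
        List<String> ordered = new ArrayList<>();
        for (String url : shuffled) {                 // matching URLs first
            if (preferred.matcher(url).find()) ordered.add(url);
        }
        for (String url : shuffled) {                 // the rest as fallback
            if (!preferred.matcher(url).find()) ordered.add(url);
        }
        return ordered;
    }

    public static void main(String[] args) {
        List<String> urls = List.of(
            "http://rack2-a:8983/solr", "http://rack1-a:8983/solr",
            "http://rack2-b:8983/solr", "http://rack1-b:8983/solr");
        // A client on rack1 would set the pattern to "rack1": both rack1 nodes
        // end up at the front, while rack2 nodes remain available as fallback.
        System.out.println(orderUrls(urls, "rack1"));
    }
}
```

This keeps the existing random load-balancing within each partition, which matches the "by preference, with cross-rack fallback" behavior described above.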






[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2015-12-18 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064862#comment-15064862
 ] 

Arcadius Ahouansou commented on SOLR-8146:
--

Thank you very much [~mikemccand] for your help!

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>






[jira] [Comment Edited] (SOLR-7525) Add ComplementStream to the Streaming API and Streaming Expressions

2015-12-18 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064830#comment-15064830
 ] 

Dennis Gove edited comment on SOLR-7525 at 12/18/15 10:06 PM:
--

Rebases off of trunk and adds a DistinctOperation for use in the ReducerStream. 
The DistinctOperation ensures that for any given group only a single tuple will 
be returned. Currently it is implemented to return the first tuple in a group 
but a possible enhancement down the road could be to support a parameter asking 
for some other tuple in the group (such as the first in a sub-sorted list).

Also, while implementing this I realized that the UniqueStream can be 
refactored to be just a type of ReducerStream with DistinctOperation. -That 
change is not included in this patch but will be done under a separate ticket.-

Also of note, I'm not sure if the getChildren() function declared in 
TupleStream is necessary any longer. If I recall correctly, that function was 
used by the StreamHandler when passing streams to workers, but since all that 
has been changed to pass the result of toExpression(), I think we can get 
rid of the getChildren() function. I will explore that possibility.


was (Author: dpgove):
Rebases off of trunk and adds a DistinctOperation for use in the ReducerStream. 
The DistinctOperation ensures that for any given group only a single tuple will 
be returned. Currently it is implemented to return the first tuple in a group 
but a possible enhancement down the road could be to support a parameter asking 
for some other tuple in the group (such as the first in a sub-sorted list).

Also, while implementing this I realized that the UniqueStream can be 
refactored to be just a type of ReducerStream with DistinctOperation. That 
change is not included in this patch but will be done under a separate ticket.

Also of note, I'm not sure if the getChildren() function declared in 
TupleStream is necessary any longer. If I recall correctly that function was 
used by the StreamHandler when passing streams to workers but since all that 
has been changed to pass the result of toExpression()  I think we can get 
rid of the getChildren() function. I will explore that possibility.

> Add ComplementStream to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-7525
> URL: https://issues.apache.org/jira/browse/SOLR-7525
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7525.patch, SOLR-7525.patch, SOLR-7525.patch
>
>
> This ticket adds a ComplementStream to the Streaming API and Streaming 
> Expression language.
> The ComplementStream will wrap two TupleStreams (StreamA, StreamB) and emit 
> Tuples from StreamA that are not in StreamB.
> Streaming API Syntax:
> {code}
> ComplementStream cstream = new ComplementStream(streamA, streamB, comp);
> {code}
> Streaming Expression syntax:
> {code}
> complement(search(...), search(...), on(...))
> {code}
> Internal implementation will rely on the ReducerStream. The ComplementStream 
> can be parallelized using the ParallelStream.
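The complement semantics quoted above reduce to a set difference over tuples; a minimal sketch (plain lists stand in for TupleStreams, and streamB is materialized into a set here, which the real ReducerStream-based implementation avoids):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Minimal sketch of ComplementStream's contract: emit tuples from streamA
// that do not appear in streamB. Tuples are modeled as plain strings for
// brevity; the real implementation compares on a field comparator and works
// over sorted streams rather than buffering streamB.
public class ComplementSketch {
    static List<String> complement(List<String> streamA, List<String> streamB) {
        Set<String> inB = new HashSet<>(streamB);
        List<String> out = new ArrayList<>();
        for (String tuple : streamA) {
            if (!inB.contains(tuple)) out.add(tuple);
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> a = List.of("t1", "t2", "t3");
        List<String> b = List.of("t2");
        System.out.println(complement(a, b)); // [t1, t3]
    }
}
```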






[jira] [Updated] (SOLR-7525) Add ComplementStream to the Streaming API and Streaming Expressions

2015-12-18 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-7525:
--
Attachment: SOLR-7525.patch

As it turns out, IntersectStream and ComplementStream can both make use of a 
UniqueStream, which in turn makes use of a ReducerStream. As such, this new 
patch implements Intersect and Complement with streamB as an instance of 
UniqueStream. UniqueStream is changed to be implemented as a type of 
ReducerStream.

> Add ComplementStream to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-7525
> URL: https://issues.apache.org/jira/browse/SOLR-7525
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7525.patch, SOLR-7525.patch, SOLR-7525.patch
>






[jira] [Updated] (SOLR-7525) Add ComplementStream to the Streaming API and Streaming Expressions

2015-12-18 Thread Dennis Gove (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dennis Gove updated SOLR-7525:
--
Attachment: SOLR-7525.patch

Rebases off of trunk and adds a DistinctOperation for use in the ReducerStream. 
The DistinctOperation ensures that for any given group only a single tuple will 
be returned. Currently it is implemented to return the first tuple in a group 
but a possible enhancement down the road could be to support a parameter asking 
for some other tuple in the group (such as the first in a sub-sorted list).

Also, while implementing this I realized that the UniqueStream can be 
refactored to be just a type of ReducerStream with DistinctOperation. That 
change is not included in this patch but will be done under a separate ticket.

Also of note, I'm not sure if the getChildren() function declared in 
TupleStream is necessary any longer. If I recall correctly, that function was 
used by the StreamHandler when passing streams to workers, but since all that 
has been changed to pass the result of toExpression(), I think we can get 
rid of the getChildren() function. I will explore that possibility.
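The first-tuple-per-group behavior of the DistinctOperation described above can be sketched like this (hypothetical tuple shape; these are not the actual Solr classes):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of DistinctOperation semantics: within each group keyed by the
// reduce field, keep only the first tuple seen and drop the rest. Tuples are
// modeled as (groupKey, value) string pairs for brevity.
public class DistinctOperationSketch {
    static List<String[]> distinctFirstPerGroup(List<String[]> tuples) {
        Map<String, String[]> firstPerKey = new LinkedHashMap<>();
        for (String[] t : tuples) {
            firstPerKey.putIfAbsent(t[0], t);   // first tuple in the group wins
        }
        return List.copyOf(firstPerKey.values());
    }

    public static void main(String[] args) {
        List<String[]> in = List.of(
            new String[]{"a", "1"}, new String[]{"a", "2"}, new String[]{"b", "3"});
        for (String[] t : distinctFirstPerGroup(in)) {
            System.out.println(t[0] + " -> " + t[1]); // a -> 1, b -> 3
        }
    }
}
```

The enhancement floated above (returning "some other tuple in the group") would amount to replacing putIfAbsent with a merge step that applies a sub-sort comparator.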

> Add ComplementStream to the Streaming API and Streaming Expressions
> ---
>
> Key: SOLR-7525
> URL: https://issues.apache.org/jira/browse/SOLR-7525
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-7525.patch, SOLR-7525.patch
>






[jira] [Closed] (SOLR-8443) Change /stream handler http param from "stream" to "expr"

2015-12-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein closed SOLR-8443.

Resolution: Fixed

> Change /stream handler http param from "stream" to "expr"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8443.patch
>
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "expr". 
> For example:
> http://localhost:8983/collection1/stream?expr=search(...)






[jira] [Commented] (SOLR-8443) Change /stream handler http param from "stream" to "expr"

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064808#comment-15064808
 ] 

ASF subversion and git services commented on SOLR-8443:
---

Commit 1720849 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1720849 ]

SOLR-8443: Change /stream handler http param from stream to expr

> Change /stream handler http param from "stream" to "expr"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8443.patch
>






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064802#comment-15064802
 ] 

Mark Miller commented on LUCENE-6933:
-

Sounds like David is saying the opposite: the other tools are following renames, 
and --follow with git is not working.

[~dsmiley], is your git at least 1.5.3? A quick Google search suggests that's 
the version where --follow was introduced.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible, it follows that the exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064797#comment-15064797
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Oops, sorry. I misread your comment. I don't know; I'll look into it tomorrow.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064795#comment-15064795
 ] 

Dawid Weiss commented on LUCENE-6933:
-

Look at the comments above, David -- the tools probably don't "follow" renames. 
There should be an answer in those tools' docs on how to fix this behavior; the 
history of renames is in the repo, for sure.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064770#comment-15064770
 ] 

David Smiley commented on LUCENE-6933:
--

Thanks for all the hard work you put into this Dawid!

I was trying to test out how far the history goes back on the Solr side, using 
SearchComponent.java as an example.  I tried this:
{{git log --follow 
solr/core/src/java/org/apache/solr/handler/component/SearchComponent.java}}, but 
it only goes back to 2012-04. Yet when I use another tool I'm familiar with, 
Atlassian SourceTree, I find early commit messages with "SearchComponent" in 
them, revealing commit 4a490cff561e9ab492ec27fdc55c51c0db02ffed from 2007-12. Any 
ideas why {{git log --follow}} didn't work in this case?

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>






[JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15244 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15244/
Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseParallelGC 
-XX:-CompactStrings

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SaslZkACLProviderTest

Error Message:
5 threads leaked from SUITE scope at org.apache.solr.cloud.SaslZkACLProviderTest:
   1) Thread[id=9354, name=apacheds, state=WAITING, group=TGRP-SaslZkACLProviderTest]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:516)
        at java.util.TimerThread.mainLoop(Timer.java:526)
        at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=9358, name=changePwdReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   3) Thread[id=9356, name=kdcReplayCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   4) Thread[id=9355, name=ou=system.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)
   5) Thread[id=9357, name=groupCache.data, state=TIMED_WAITING, group=TGRP-SaslZkACLProviderTest]
        at jdk.internal.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:218)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2103)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1136)
        at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:853)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1083)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1143)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:632)
        at java.lang.Thread.run(Thread.java:747)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 5 threads leaked from SUITE 
scope at org.apache.solr.cloud.SaslZkACLProviderTest: 
   1) Thread[id=9354, name=apacheds, state=WAITING, 
group=TGRP-SaslZkACLProviderTest]
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:516)
at java.util.TimerThread.mainLoop(Timer.java:526)
at java.util.TimerThread.run(Timer.java:505)
   2) Thread[id=9358, name=changePwdReplayCache.data, state=TIMED_WAITING, 
group=TGRP-SaslZkACLProviderTest]

[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064736#comment-15064736
 ] 

Dawid Weiss commented on LUCENE-6933:
-

The exact number will depend slightly on the git version used (I had 1.x on one 
machine and 2.x on the other). I used simple estimates in the form of 
{code}
du -sh .git
{code}
on a clean clone.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).
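The "keep empty files instead of large JAR blobs" goal above can be sketched as a history rewrite that truncates every {{*.jar}} to zero bytes while keeping its path. This is only a toy illustration on a throwaway repo, not the actual migration tooling (which is in tools.zip); it uses {{git filter-branch}}, and all file names are made up:

```shell
# Toy repo: one commit containing a source file and a binary "JAR".
tmp=$(mktemp -d) && cd "$tmp" && git init -q repo && cd repo
printf 'binary-payload' > lib.jar
echo 'class A {}' > A.java
git add -A
git -c user.email=a@b.c -c user.name=t commit -qm 'commit with jar'

# Rewrite all history, emptying every *.jar but keeping the path, so
# the file name survives for historical reference only.
# (truncate is GNU coreutils; FILTER_BRANCH_SQUELCH_WARNING silences
# the filter-branch deprecation notice on newer git.)
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch -f --tree-filter \
  'find . -name "*.jar" -type f -exec truncate -s 0 {} +' \
  -- --all > /dev/null 2>&1

git show HEAD:lib.jar | wc -c   # the JAR content is now empty
```

Newer tooling such as git-filter-repo would express the same rewrite more safely, but the effect on the stored history is the same.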






[JENKINS] Lucene-Solr-5.x-Windows (64bit/jdk1.8.0_66) - Build # 5352 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/5352/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([40A863EB2E411D3B:A9F2D8D3B0D88D93]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:749)
at 
org.apache.solr.core.TestArbitraryIndexDir.testLoadNewIndexDir(TestArbitraryIndexDir.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: REQUEST FAILED: xpath=*[count(//doc)=1]
xml response was: 

(response XML stripped by the mail archiver; only two "0" values survive)


request was:q=id:2&qt=standard&start=0&rows=20&version=2.2
at org.apache.

[jira] [Commented] (LUCENE-6934) java.io.EOFException: read past EOF: MMapIndexInput [slice=_342.fdx]

2015-12-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064712#comment-15064712
 ] 

Michael McCandless commented on LUCENE-6934:


Hi, can you instead send an email to the Lucene user's list 
(java-u...@lucene.apache.org)?  This looks like index corruption, and there 
could be various causes (maybe including bugs that have been fixed since 4.2, 
which is quite old).

> java.io.EOFException: read past EOF: MMapIndexInput [slice=_342.fdx]
> 
>
> Key: LUCENE-6934
> URL: https://issues.apache.org/jira/browse/LUCENE-6934
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 4.2
>Reporter: Tejas Jethva
>
> We are getting the following exception when trying to commit changes 
> made to the index.
> java.io.EOFException: read past EOF: 
> MMapIndexInput(path="/_342.cfs") [slice=_342.fdx]
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readByte(ByteBufferIndexInput.java:78)
>   at org.apache.lucene.store.DataInput.readInt(DataInput.java:84)
>   at 
> org.apache.lucene.store.ByteBufferIndexInput.readInt(ByteBufferIndexInput.java:129)
>   at org.apache.lucene.codecs.CodecUtil.checkHeader(CodecUtil.java:126)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsReader.<init>(CompressingStoredFieldsReader.java:102)
>   at 
> org.apache.lucene.codecs.compressing.CompressingStoredFieldsFormat.fieldsReader(CompressingStoredFieldsFormat.java:113)
>   at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:147)
>   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:56)
>   at 
> org.apache.lucene.index.ReadersAndLiveDocs.getReader(ReadersAndLiveDocs.java:121)
>   at 
> org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:216)
>   at 
> org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2961)
>   at 
> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2952)
>   at 
> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2692)
>   at 
> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2827)
>   at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2807)
> This is the exception when we tried to close after commit failed:
> java.io.FileNotFoundException: /_342.cfs (No such file or 
> directory)
>   at java.io.RandomAccessFile.open(Native Method)
>   at java.io.RandomAccessFile.<init>(RandomAccessFile.java:241)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:193)
>   at 
> org.apache.lucene.store.MMapDirectory.createSlicer(MMapDirectory.java:203)
>   at 
> org.apache.lucene.store.CompoundFileDirectory.<init>(CompoundFileDirectory.java:102)
>   at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:116)
>   at org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:56)
>   at 
> org.apache.lucene.index.ReadersAndLiveDocs.getReader(ReadersAndLiveDocs.java:121)
>   at 
> org.apache.lucene.index.BufferedDeletesStream.applyDeletes(BufferedDeletesStream.java:216)
>   at 
> org.apache.lucene.index.IndexWriter.applyAllDeletes(IndexWriter.java:2961)
>   at 
> org.apache.lucene.index.IndexWriter.maybeApplyDeletes(IndexWriter.java:2952)
>   at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:2925)
>   at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:2894)
>   at 
> org.apache.lucene.index.IndexWriter.closeInternal(IndexWriter.java:928)
>   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:883)
>   at org.apache.lucene.index.IndexWriter.close(IndexWriter.java:845)
> Could you please point us to what the possible cause of this might be?






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064700#comment-15064700
 ] 

Paul Elschot commented on LUCENE-6933:
--

git gui reports this:

Number of packed objects: 741540
Number of packs: 1
Disk space used by packed objects: 228602 KiB.

Sorry for the noise, the earlier counts include the working tree.
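For an apples-to-apples comparison, {{git count-objects -v}} reports only the object store under {{.git}}, independent of the checked-out working tree — unlike {{du -sh .}} or {{find | wc}}. A small demonstration on a throwaway repo (the repo contents are made up):

```shell
# Throwaway repo with one packed commit, just to demonstrate the command.
tmp=$(mktemp -d) && cd "$tmp" && git init -q demo && cd demo
echo 'hello' > file.txt
git add file.txt
git -c user.email=a@b.c -c user.name=t commit -qm 'init'
git gc -q   # pack loose objects, roughly the state after a fresh clone

# Reports counts/sizes of the .git object store only; the working tree
# is never included in these numbers.
git count-objects -v | grep -E '^(count|in-pack|size-pack):'
```

Comparing {{size-pack}} across machines avoids the working-tree noise entirely.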







Re: Managed Resource Unit Test Failures

2015-12-18 Thread Michael Nilsson
I found what the problem was.  In this commit
(https://github.com/apache/lucene-solr/commit/5fde9e39e0d08398f1fc9988a03cb5932180c34a),
Noble Paul removed support for the config managed endpoint
(https://issues.apache.org/jira/browse/SOLR-6476).  One of the commits
associated with this ticket mentioned "refactored bulk schema APIs and
other read REST APIs to use standard RequestHandler mechanism".

What do I need to do so that I can properly hit my /config/test
endpoint?



On Thu, Dec 17, 2015 at 5:36 PM, Michael Nilsson 
wrote:

> I'm working on publishing a patch against trunk, adding a learning to rank
> contrib module.  For some reason, our unit tests that hit our config
> managed resources no longer seem to be recognizing the config/managed
> endpoint, but they were ok in 4.10.  I've pasted the code with the small
> test case below.  Anyone have an idea of why the ManagedResource doesn't
> seem to be registered?
>
> Essentially my test just assertJQ("/config/managed",
> "/responseHeader/status==0"), and its @BeforeClass init() sets everything
> up the exact same way that SolrRestletTestBase.java does, except putting a
> /config/* instead of /schema/*, and uses my solrconfig which has a
> searchComponent that registers the managed resource.  The managed resource
> is just a dummy, and the only function in the component that does something
> is the inform(SolrCore) method.
>
>
> Error:
> 
> HTTP ERROR: 404
> Problem accessing /solr/collection1/config/managed. Reason:
> *Can not find: /solr/collection1/config/managed*
> Powered by Jetty://
> 
>
>
>
> TestManaged.java:
> public class TestManaged extends RestTestBase {
>
>   @BeforeClass
>   public static void init() throws Exception {
> String solrconfig = "solrconfig-testend.xml";
> String schema = "schema-testend.xml";
>
> Path tempDir = createTempDir();
> Path coresDir = tempDir.resolve("cores");
>
> System.setProperty("coreRootDirectory", coresDir.toString());
> System.setProperty("configSetBaseDir", TEST_HOME());
>
> final SortedMap<ServletHolder,String> extraServlets = new TreeMap<>();
> final ServletHolder solrSchemaRestApi = new
> ServletHolder("SolrSchemaRestApi", ServerServlet.class);
> solrSchemaRestApi.setInitParameter("org.restlet.application",
> "org.apache.solr.rest.SolrSchemaRestApi");
> //extraServlets.put(solrSchemaRestApi, "/schema/*");  // '/schema/*'
> matches '/schema', '/schema/', and '/schema/whatever...'
> *extraServlets.put(solrSchemaRestApi, "/config/*");*  // '/config/*'
> matches '/config', '/config/', and '/config/whatever...'
>
> Properties props = new Properties();
> props.setProperty("name", DEFAULT_TEST_CORENAME);
> *props.setProperty("config", solrconfig);*
> *props.setProperty("schema", schema);*
> props.setProperty("configSet", "collection1");
>
> writeCoreProperties(coresDir.resolve("core"), props,
> "SolrRestletTestBase");
> createJettyAndHarness(TEST_HOME(),* solrconfig, schema, *"/solr",
> true, extraServlets);
>   }
>
>
>   @Test
>   public void testRestManagerEndpoints() throws Exception {
> String request = "/config/managed";
> *assertJQ(request, "/responseHeader/status==0");*
>   }
>
> }
>
>
>
>
> solrconfig-test.xml:
> (Contains the SolrCoreAware searchComponent that registers the resource)
>
> ...
> *<searchComponent name="managedComponent" class="org.apache.solr.testmanaged.ManagedComponent"/>*
>
>   <!-- requestHandler XML stripped by the mail archiver: its defaults
>included "json" and "id", and "managedComponent" was listed as a
>last-component -->
> ...
>
>
>
> ManagedComponent.java:
> public class ManagedComponent extends SearchComponent implements
> SolrCoreAware {
>
>   public void inform(SolrCore core) {
> *core.getRestManager().addManagedResource("/config/test",
> ManagedStore.class);*
>   }
>
>   public void prepare(ResponseBuilder rb) throws IOException {}
>   public void process(ResponseBuilder rb) throws IOException {}
>   public String getDescription() { return null; }
> }
>
>
>
> ManagedStore.java:  (It is just a dummy class for the test)
> public class ManagedStore extends ManagedResource implements
> ManagedResource.ChildResourceSupport {
>
>   public ManagedStore(String resourceId, SolrResourceLoader loader,
> StorageIO storageIO) throws SolrException {
> super(resourceId, loader, storageIO);
>   }
>
>   protected void onManagedDataLoadedFromStorage(NamedList<?>
> managedInitArgs, Object managedData) throws SolrException { }
>
>   public Object applyUpdatesToManagedData(Object updates) { return "HELLO
> UPDATES"; }
>
>   public void doDeleteChild(BaseSolrResource endpoint, String childId) { }
>
>   public void doGet(BaseSolrResource endpoint, String childId) {
> SolrQueryResponse response = endpoint.getSolrResponse();
> response.add("TEST", "HELLO GET");
>   }
>
> }
>
>
>


[jira] [Updated] (SOLR-8443) Change /stream handler http param from "stream" to "expr"

2015-12-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8443:
-
Attachment: SOLR-8443.patch

Patch with all streaming tests passing
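With the patch, a request would use the renamed parameter along these lines. This is only a sketch: the expression and collection name are made up, and the command is printed rather than executed since no running server is assumed ({{--data-urlencode}} takes care of the quotes and parentheses in the expression):

```shell
# A hypothetical streaming expression for the renamed "expr" parameter.
expr='search(collection1, q="*:*", fl="id", sort="id asc")'

# Print the curl invocation that would send it to the /stream handler.
printf 'curl --data-urlencode '\''expr=%s'\'' http://localhost:8983/solr/collection1/stream\n' "$expr"
```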

> Change /stream handler http param from "stream" to "expr"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
> Attachments: SOLR-8443.patch
>
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "expr". 
> For example:
> http://localhost:8983/collection1/stream?expr=search(...)






[jira] [Commented] (SOLR-3526) Remove classfile dependency on ZooKeeper from CoreContainer

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064657#comment-15064657
 ] 

Mark Miller commented on SOLR-3526:
---

I'm not sure we want to deal with those kinds of restrictions. Solr has many 
dependencies and ZK is not a particularly large one. I don't see why it should 
get special attention or special rules.

> Remove classfile dependency on ZooKeeper from CoreContainer
> ---
>
> Key: SOLR-3526
> URL: https://issues.apache.org/jira/browse/SOLR-3526
> Project: Solr
>  Issue Type: Wish
>  Components: SolrCloud
>Affects Versions: 4.0-ALPHA
>Reporter: Michael Froh
>
> We are using Solr as a library embedded within an existing application, and 
> are currently developing toward using 4.0 when it is released.
> We are currently instantiating SolrCores with null CoreDescriptors (and hence 
> no CoreContainer), since we don't need SolrCloud functionality (and do not 
> want to depend on ZooKeeper).
> A couple of months ago, SearchHandler was modified to try to retrieve a 
> ShardHandlerFactory from the CoreContainer. I was able to work around this by 
> specifying a dummy ShardHandlerFactory in the config.
> Now UpdateRequestProcessorChain is inserting a DistributedUpdateProcessor 
> into my chains, again triggering a NPE when trying to dereference the 
> CoreDescriptor.
> I would happily place the SolrCores in CoreContainers, except that 
> CoreContainer imports and references org.apache.zookeeper.KeeperException, 
> which we do not have (and do not want) in our classpath. Therefore, I get a 
> ClassNotFoundException when loading the CoreContainer class.
> Ideally (IMHO), ZkController should isolate the ZooKeeper dependency, and 
> simply rethrow KeeperExceptions as 
> org.apache.solr.common.cloud.ZooKeeperException (or some Solr-hosted checked 
> exception). Then CoreContainer could remove the offending import/references.






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064609#comment-15064609
 ] 

Paul Elschot commented on LUCENE-6933:
--

I cloned from  https://github.com/dweiss/lucene-solr-svn2git.git, and it works 
as advertised.
After a git gc, the total file size is:

find . -type f -print0 | xargs -0 cat | wc
2942604 13472825 347467457

This is just under 350MB, which does not seem to be consistent with the 214MB 
that was mentioned above. Did I do something wrong?

To me the actual size is not a problem at all.

For reference, the total number of files in the local git repo is 9322:
find . -type f | wc
   9322    9324  694864

And thanks for showing how and when to graft.
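The grafting referred to above can be reproduced in miniature with {{git replace --graft}}, which links a converted root commit to an older history without rewriting any objects. A toy sketch (all commit messages and repo names are made up):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q repo && cd repo
gitc() { git -c user.email=a@b.c -c user.name=t "$@"; }

# Stand-in for the tip of the old (pre-conversion) history:
gitc commit -q --allow-empty -m 'old history tip'
old=$(git rev-parse HEAD)

# Stand-in for the root of the freshly converted history (an
# unrelated, parentless branch):
git checkout -q --orphan converted
gitc commit -q --allow-empty -m 'converted root'
root=$(git rev-parse HEAD)

# Graft: log/blame now walk from the converted history into the old one.
git replace --graft "$root" "$old"
git log --oneline | wc -l   # both commits are now visible
```

Because {{git replace}} is a local ref under {{refs/replace/}}, the graft can be shared or dropped without touching the rewritten history itself.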








[jira] [Updated] (SOLR-8445) fix line separator in log4j.properties files

2015-12-18 Thread Ahmet Arslan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ahmet Arslan updated SOLR-8445:
---
Attachment: SOLR-8445.patch

> fix line separator in log4j.properties files
> 
>
> Key: SOLR-8445
> URL: https://issues.apache.org/jira/browse/SOLR-8445
> Project: Solr
>  Issue Type: Bug
>  Components: Server
>Affects Versions: 5.4, Trunk
>Reporter: Ahmet Arslan
>Priority: Trivial
> Attachments: SOLR-8445.patch
>
>
> new line is mistyped in conversion pattern 






[jira] [Commented] (SOLR-8412) SchemaManager should synchronize on performOperations method

2015-12-18 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064604#comment-15064604
 ] 

Varun Thacker commented on SOLR-8412:
-

Hi Yonik,

{quote}
Reviewing the existing code some, I see this:
- SchemaManager.performOperations() calls doOperations() protected by 
schemaUpdateLock
- this performs a list of operations on the latest ManagedIndexSchema object, 
which may be created fresh, but will be   passed the same schemaUpdateLock
- these operations can call things like addFields()
{quote}

Yes, this is what my current understanding is as well. 

Here is what I additionally gathered. The motivations behind 
AddSchemaFieldsUpdateProcessor and SchemaManager are different. 
AddSchemaFieldsUpdateProcessor adds one field at a time, so the locking there 
is fine.

In SchemaManager we allow bulk operations and want to perform either all of 
the operations or none.
Now in cloud mode: SchemaManager#performOperations calls 
SchemaManager#doOperations, which grabs the latest schema from ZK and performs 
all the operations on it. Once that is done, it tries to save the 
managed-schema file by calling ZkController.persistConfigResourceToZooKeeper, 
which fails if the zkVersion provided is stale. In that case SchemaManager 
retries by fetching the latest schema from ZK and re-running the entire bulk 
operation, until success or timeout.
Thus I feel we don't need any synchronization here at all: neither the current 
{{schema.getSchemaUpdateLock()}} nor the synchronization on 
SchemaManager#performOperations in the new patch, since 
ZkController.persistConfigResourceToZooKeeper takes care of it.
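The fetch/apply/conditional-save retry loop described above can be sketched as follows. This is a simplified, hypothetical model (the VersionedSchema and Store names are illustrative stand-ins, not Solr or ZooKeeper APIs): the conditional save only succeeds if the version read at fetch time is still current, otherwise the whole bulk operation is retried against a fresh copy.

```java
import java.util.concurrent.atomic.AtomicReference;

/**
 * Hypothetical sketch of optimistic-concurrency retry: fetch the latest
 * schema with its version, apply all operations, then attempt a conditional
 * save that fails if another writer bumped the version in the meantime.
 */
public class OptimisticRetrySketch {
    static final class VersionedSchema {
        final int version;
        final String body;
        VersionedSchema(int version, String body) {
            this.version = version;
            this.body = body;
        }
    }

    /** Stand-in for ZK: the conditional write succeeds only on a fresh version. */
    static final class Store {
        private final AtomicReference<VersionedSchema> current =
            new AtomicReference<>(new VersionedSchema(0, "schema"));

        VersionedSchema fetchLatest() { return current.get(); }

        boolean saveIfFresh(VersionedSchema expected, String newBody) {
            // Succeeds only if nobody replaced the schema since we fetched it.
            return current.compareAndSet(expected,
                new VersionedSchema(expected.version + 1, newBody));
        }
    }

    /** Re-run the entire bulk operation until the conditional save succeeds. */
    static VersionedSchema applyBulk(Store store, String suffix, int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            VersionedSchema latest = store.fetchLatest();
            String updated = latest.body + suffix;  // perform all operations
            if (store.saveIfFresh(latest, updated)) {
                return store.fetchLatest();
            }
            // Stale version: another writer won; refetch and retry.
        }
        throw new IllegalStateException("timed out applying bulk operation");
    }

    public static void main(String[] args) {
        Store store = new Store();
        VersionedSchema result = applyBulk(store, "+fieldA", 5);
        System.out.println(result.version + " " + result.body); // prints: 1 schema+fieldA
    }
}
```

Under this model no client-side lock is needed for correctness: the version check at save time serializes conflicting writers.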

In standalone mode, synchronizing the current way and making 
SchemaManager#performOperations synchronized have the same effect, right?

bq. Bottom line: the synchronization in the current code is complex enough that 
I don't know if the proposed simplifications are safe or not. If you could add 
some explanation around that, it would be great.

Point taken. On the simplification part, here is why I felt the patch 
simplifies things: when I first looked at the code, I was confused about how 
we can take a lock on one schema object and then fetch the latest schema from 
ZK and operate on that. I later understood the reasoning, so maybe the patch 
simplifies that aspect?

On the correctness part, does my explanation here help, or do you still think 
this is not the right way to go? If that's the case, I could add comments to 
make the code easier to read, and fix the remaining issues I found in another 
patch.

> SchemaManager should synchronize on performOperations method
> 
>
> Key: SOLR-8412
> URL: https://issues.apache.org/jira/browse/SOLR-8412
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-8412.patch, SOLR-8412.patch, SOLR-8412.patch
>
>
> Currently SchemaManager synchronizes on {{schema.getSchemaUpdateLock()}}. We 
> should synchronize on {{performOperations}} instead. 
> The net effect will be the same, but the code will be clearer. 
> {{schema.getSchemaUpdateLock()}} is used when you want to edit a schema and 
> add one field at a time. But SchemaManager does bulk operations, i.e. it 
> performs all operations and then persists the final schema. 
> If two concurrent operations take place, the later operation will retry by 
> fetching the latest schema.






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_66) - Build # 5482 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/5482/
Java: 64bit/jdk1.8.0_66 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, MDCAwareThreadPoolExecutor, SolrCore, 
MockDirectoryWrapper]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 4 object(s) that were not 
released!!! [MockDirectoryWrapper, MDCAwareThreadPoolExecutor, SolrCore, 
MockDirectoryWrapper]
at __randomizedtesting.SeedInfo.seed([BA8FD9090776008F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:228)
at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestLazyCores

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.core.TestLazyCores: 1) 
Thread[id=13351, name=searcherExecutor-6094-thread-1, state=WAITING, 
group=TGRP-TestLazyCores] at sun.misc.Unsafe.park(Native Method)
 at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.core.TestLazyCores: 
   1) Thread[id=13351, name=searcherExecutor-6094-thread-1, state=WAITING, 
group=TGRP-TestLazyCores]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at __randomizedt

[jira] [Created] (SOLR-8445) fix line separator in log4j.properties files

2015-12-18 Thread Ahmet Arslan (JIRA)
Ahmet Arslan created SOLR-8445:
--

 Summary: fix line separator in log4j.properties files
 Key: SOLR-8445
 URL: https://issues.apache.org/jira/browse/SOLR-8445
 Project: Solr
  Issue Type: Bug
  Components: Server
Affects Versions: 5.4, Trunk
Reporter: Ahmet Arslan
Priority: Trivial


The newline is mistyped in the conversion pattern.
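For context, a hedged sketch of what such a fix typically looks like in a log4j.properties file (the actual change is in the attached patch; the appender name and pattern below are illustrative, not copied from Solr's config):

```properties
# Broken (illustrative): a hard-coded \n yields a fixed Unix newline,
# regardless of platform.
#log4j.appender.CONSOLE.layout.ConversionPattern=%-5p (%t) %c{1.} %m\n

# Fixed: %n is log4j's platform-independent line separator.
log4j.appender.CONSOLE.layout.ConversionPattern=%-5p (%t) %c{1.} %m%n
```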






[jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6933:

Attachment: tools.zip

Some tools used during the migration process (customized bfg).

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log, tools.zip
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064566#comment-15064566
 ] 

Dawid Weiss commented on LUCENE-6933:
-

I'd keep those resources at least in the releases made in the past 12 months 
or so; it should still truncate nicely. You can play with it yourself if you 
wish; the instructions are attached to the issue. I'll attach the custom tool 
too.







[jira] [Commented] (SOLR-3526) Remove classfile dependency on ZooKeeper from CoreContainer

2015-12-18 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-3526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064558#comment-15064558
 ] 

Tomás Fernández Löbbe commented on SOLR-3526:
-

I think the change makes sense; it's a valid concern that people using 
EmbeddedSolrServer don't want ZooKeeper dependencies. 
I'm wondering if there is an easy way to test this so that such dependencies 
are not inadvertently added later.

> Remove classfile dependency on ZooKeeper from CoreContainer
> ---
>
> Key: SOLR-3526
> URL: https://issues.apache.org/jira/browse/SOLR-3526
> Project: Solr
>  Issue Type: Wish
>  Components: SolrCloud
>Affects Versions: 4.0-ALPHA
>Reporter: Michael Froh
>
> We are using Solr as a library embedded within an existing application, and 
> are currently developing toward using 4.0 when it is released.
> We are currently instantiating SolrCores with null CoreDescriptors (and hence 
> no CoreContainer), since we don't need SolrCloud functionality (and do not 
> want to depend on ZooKeeper).
> A couple of months ago, SearchHandler was modified to try to retrieve a 
> ShardHandlerFactory from the CoreContainer. I was able to work around this by 
> specifying a dummy ShardHandlerFactory in the config.
> Now UpdateRequestProcessorChain is inserting a DistributedUpdateProcessor 
> into my chains, again triggering a NPE when trying to dereference the 
> CoreDescriptor.
> I would happily place the SolrCores in CoreContainers, except that 
> CoreContainer imports and references org.apache.zookeeper.KeeperException, 
> which we do not have (and do not want) in our classpath. Therefore, I get a 
> ClassNotFoundException when loading the CoreContainer class.
> Ideally (IMHO), ZkController should isolate the ZooKeeper dependency, and 
> simply rethrow KeeperExceptions as 
> org.apache.solr.common.cloud.ZooKeeperException (or some Solr-hosted checked 
> exception). Then CoreContainer could remove the offending import/references.
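The isolation proposed above can be sketched in a hedged, single-file form (the class names below are illustrative stand-ins; in reality KeeperException lives in the zookeeper jar and the isolation works because only one class references it, which a single file cannot fully demonstrate): catch the third-party checked exception inside the one class that uses the library and rethrow a locally-owned exception, so callers never reference, and therefore never class-load, the third-party type.

```java
/**
 * Hypothetical sketch of isolating a third-party checked exception behind
 * a locally-owned unchecked one, so callers have no classfile dependency
 * on the third-party type.
 */
public class DependencyIsolationSketch {
    /** Locally-owned exception; callers depend only on this. */
    static class ZooKeeperException extends RuntimeException {
        ZooKeeperException(Throwable cause) { super(cause); }
    }

    /** Stand-in for org.apache.zookeeper.KeeperException. */
    static class KeeperException extends Exception {
        KeeperException(String msg) { super(msg); }
    }

    /** Only this method's class would need the third-party jar on its classpath. */
    static void persistToZk(boolean fail) {
        try {
            if (fail) {
                throw new KeeperException("connection loss");
            }
        } catch (KeeperException e) {
            // Rethrow as the locally-hosted exception; the third-party
            // type never escapes this method's signature.
            throw new ZooKeeperException(e);
        }
    }

    public static void main(String[] args) {
        try {
            persistToZk(true);
        } catch (ZooKeeperException e) {
            System.out.println("caught: " + e.getCause().getMessage()); // prints: caught: connection loss
        }
    }
}
```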






[jira] [Commented] (SOLR-8279) Add a new test fault injection approach and a new SolrCloud test that stops and starts the cluster while indexing data and with random faults.

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064487#comment-15064487
 ] 

ASF subversion and git services commented on SOLR-8279:
---

Commit 1720841 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1720841 ]

SOLR-8279: One of two tests was not calling TestInjection#clear after using it. 
Call clear in the Solr base test class instead.

> Add a new test fault injection approach and a new SolrCloud test that stops 
> and starts the cluster while indexing data and with random faults.
> --
>
> Key: SOLR-8279
> URL: https://issues.apache.org/jira/browse/SOLR-8279
> Project: Solr
>  Issue Type: Test
>Reporter: Mark Miller
>Assignee: Mark Miller
> Attachments: SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch, 
> SOLR-8279.patch, SOLR-8279.patch, SOLR-8279.patch
>
>







[jira] [Commented] (SOLR-8317) add responseHeader and response accessors to SolrQueryResponse

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064437#comment-15064437
 ] 

ASF subversion and git services commented on SOLR-8317:
---

Commit 1720838 from [~cpoerschke] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720838 ]

SOLR-8317: add responseHeader and response accessors to SolrQueryResponse. 
TestSolrQueryResponse tests for accessors. (merge in revision 1720822 from 
trunk)

> add responseHeader and response accessors to SolrQueryResponse
> --
>
> Key: SOLR-8317
> URL: https://issues.apache.org/jira/browse/SOLR-8317
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8317-part1of2.patch, SOLR-8317.patch
>
>
> To make code easier to understand and modify. Proposed patch against trunk to 
> follow.






[jira] [Commented] (LUCENE-6936) TestDimensionalRangeQuery failures: AIOOBE while merging

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064416#comment-15064416
 ] 

ASF subversion and git services commented on LUCENE-6936:
-

Commit 1720837 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1720837 ]

LUCENE-6936: duh, get dimensional value merging working again

> TestDimensionalRangeQuery failures: AIOOBE while merging 
> -
>
> Key: LUCENE-6936
> URL: https://issues.apache.org/jira/browse/LUCENE-6936
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: Trunk
>Reporter: Steve Rowe
>Assignee: Michael McCandless
> Fix For: Trunk
>
>
> From [http://jenkins.sarowe.net/job/Lucene-Solr-Nightly-trunk/105/] - neither 
> failure reproduced for me on the same box:
> {noformat}
>[junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
>[junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestDimensionalRangeQuery -Dtests.method=testRandomLongsBig 
> -Dtests.seed=BEF1D45ADA12B09B -Dtests.multiplier=2 -Dtests.nightly=true 
> -Dtests.slow=true 
> -Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
> -Dtests.locale=cs_CZ -Dtests.timezone=Africa/Porto-Novo -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   43.4s J5  | TestDimensionalRangeQuery.testRandomLongsBig 
> <<<
>[junit4]> Throwable #1: 
> org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([BEF1D45ADA12B09B:95C7B6D701973443]:0)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:714)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:728)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1459)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1242)
>[junit4]>at 
> org.apache.lucene.index.RandomIndexWriter.addDocument(RandomIndexWriter.java:170)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.verifyLongs(TestDimensionalRangeQuery.java:208)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.doTestRandomLongs(TestDimensionalRangeQuery.java:147)
>[junit4]>at 
> org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsBig(TestDimensionalRangeQuery.java:114)
>[junit4]>at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: java.lang.ArrayIndexOutOfBoundsException: 1024
>[junit4]>at 
> org.apache.lucene.util.bkd.BKDWriter$MergeReader.next(BKDWriter.java:279)
>[junit4]>at 
> org.apache.lucene.util.bkd.BKDWriter.merge(BKDWriter.java:413)
>[junit4]>at 
> org.apache.lucene.codecs.lucene60.Lucene60DimensionalWriter.merge(Lucene60DimensionalWriter.java:159)
>[junit4]>at 
> org.apache.lucene.index.SegmentMerger.mergeDimensionalValues(SegmentMerger.java:168)
>[junit4]>at 
> org.apache.lucene.index.SegmentMerger.merge(SegmentMerger.java:117)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4062)
>[junit4]>at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3642)
>[junit4]>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> /var/lib/jenkins/jobs/Lucene-Solr-Nightly-trunk/workspace/lucene/build/core/test/J5/temp/lucene.search.TestDimensionalRangeQuery_BEF1D45ADA12B09B-001
>[junit4]   2> Dec 15, 2015 11:03:38 PM 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: Thread[Lucene Merge 
> Thread #634,5,TGRP-TestDimensionalRangeQuery]
>[junit4]   2> org.apache.lucene.index.MergePolicy$MergeException: 
> java.lang.ArrayIndexOutOfBoundsException: 1024
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([BEF1D45ADA12B09B]:0)
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler.handleMergeException(ConcurrentMergeScheduler.java:668)
>[junit4]   2>at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:648)
>[j

[JENKINS] Lucene-Solr-5.x-Solaris (multiarch/jdk1.7.0) - Build # 267 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Solaris/267/
Java: multiarch/jdk1.7.0 -d64 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest

Error Message:
There are still nodes recoverying - waited for 10 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 10 
seconds
at 
__randomizedtesting.SeedInfo.seed([FF77F5420062EF23:B1D4809111B9FE33]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:175)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:837)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1393)
at 
org.apache.solr.security.TestAuthorizationFramework.authorizationFrameworkTest(TestAuthorizationFramework.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:965)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:940)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailur

[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064369#comment-15064369
 ] 

Mark Miller commented on LUCENE-6933:
-

bq. i installed the chrome extension

For the command line, if you have Git 2.6 or above, you should be able to make 
--follow the default (when it makes sense) with something like {{git config 
--global log.follow true}}.
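Written out as commands (requires Git 2.6 or newer; the config key is log.follow):

```shell
# Make `git log <path>` follow renames by default (Git >= 2.6).
git config --global log.follow true

# Confirm the setting took effect.
git config --global --get log.follow
```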







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated SOLR-7865:
-
Fix Version/s: (was: 5.x)
   5.5

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java:
> This is mainly because {code}num * numFactor{code} gets called multiple times 
> from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test expects count=1, but all 3 docs are returned.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}
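The effect described above - re-applying {code}num * numFactor{code} on each call instead of exactly once - can be sketched in isolation (a hypothetical simplification, not the actual BlendedInfixSuggester code):

```java
public class SuggestCountSketch {
    // Hypothetical model of the bug: the internal fetch limit is multiplied
    // by numFactor once per invocation instead of exactly once, so repeated
    // calls inflate it and more suggestions than suggest.count come back.
    static int effectiveLimit(int num, int numFactor, int invocations) {
        int limit = num;
        for (int i = 0; i < invocations; i++) {
            limit = limit * numFactor; // "num * numFactor" applied repeatedly
        }
        return limit;
    }

    public static void main(String[] args) {
        System.out.println(effectiveLimit(1, 10, 1)); // applied once: 10
        System.out.println(effectiveLimit(1, 10, 3)); // applied thrice: 1000
    }
}
```

With suggest.count=1, an inflated internal limit would explain why all 3 documents are returned instead of 1.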






[jira] [Resolved] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved SOLR-7865.
--
   Resolution: Fixed
Fix Version/s: Trunk
   5.x

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Fix For: 5.x, Trunk
>
> Attachments: LUCENE_7865.patch
>
>






[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064368#comment-15064368
 ] 

ASF subversion and git services commented on SOLR-7865:
---

Commit 1720832 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1720832 ]

SOLR-7865: BlendedInfixSuggester was returning more results than requested

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Fix For: 5.5, Trunk
>
> Attachments: LUCENE_7865.patch
>
>






[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064362#comment-15064362
 ] 

ASF subversion and git services commented on SOLR-7865:
---

Commit 1720831 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720831 ]

SOLR-7865: BlendedInfixSuggester was returning more results than requested

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Attachments: LUCENE_7865.patch
>
>






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064360#comment-15064360
 ] 

Mark Miller commented on LUCENE-6933:
-

bq. I can go down to git repo size of 160mb

Call me silly, but I'm +1 on that. Same reason as for the JARs - if you want 
those files, they are in SVN, and that is the best place to deal with them. The 
Git repo should just try to capture all the code / build history it can.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log
>
>






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064357#comment-15064357
 ] 

Dawid Weiss commented on LUCENE-6937:
-

Oh, ok!

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Updated] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated LUCENE-6933:

Attachment: migration.txt

SVN-git merging procedure (outline). For historical reference.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log
>
>






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064353#comment-15064353
 ] 

Dawid Weiss commented on LUCENE-6933:
-

I can go down to a git repo size of 160mb by removing all files matching these 
patterns (not currently used on any of the active branches):
{code}
*.mem
*.dat
*.war
*.zip
{code}
These are mostly precompiled automata, etc. Current blobs (in any of branch_x 
and master) are not affected, but tags are. Don't know if it makes sense.
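Before rewriting history it can help to measure how much a working tree actually holds in those file types. A minimal, hypothetical helper (not part of any migration tooling):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

public class DroppableBlobSize {
    // Sums the on-disk size of regular files whose names end with one of
    // the given extensions (e.g. the *.mem/*.dat/*.war/*.zip candidates).
    static long droppableBytes(Path root, List<String> extensions) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths.filter(Files::isRegularFile)
                        .filter(p -> extensions.stream()
                                .anyMatch(e -> p.getFileName().toString().endsWith(e)))
                        .mapToLong(p -> {
                            try { return Files.size(p); } catch (IOException ex) { return 0L; }
                        })
                        .sum();
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : ".");
        System.out.println(droppableBytes(root,
                List.of(".mem", ".dat", ".war", ".zip")) + " bytes");
    }
}
```

Note this only measures a checkout, not the packed history itself; the repo-size win comes from rewriting those blobs out of every historical revision.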


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: migration.txt, multibranch-commits.log
>
>






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064347#comment-15064347
 ] 

Uwe Schindler commented on LUCENE-6937:
---

The message is there: https://github.com/apache/solr/tree/trunk

The problem is that GitHub opens the old 1.1 release branch because there is no 
"master", and "1.1" is the first branch in alphabetical order.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Updated] (SOLR-8420) Date statistics: sumOfSquares overflows long

2015-12-18 Thread Tom Hill (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Hill updated SOLR-8420:
---
Attachment: 0001-Fix-overflow-in-date-statistics.patch

Fixes overflow in stddev, too.

Not ready to commit. I still have to fix a rounding error in TestDistributed.

> Date statistics: sumOfSquares overflows long
> 
>
> Key: SOLR-8420
> URL: https://issues.apache.org/jira/browse/SOLR-8420
> Project: Solr
>  Issue Type: Bug
>  Components: SearchComponents - other
>Affects Versions: 5.4
>Reporter: Tom Hill
>Priority: Minor
> Attachments: 0001-Fix-overflow-in-date-statistics.patch, 
> 0001-Fix-overflow-in-date-statistics.patch
>
>
> The values for Dates are large enough that squaring them overflows a "long" 
> field. This should be converted to a double. 
> In StatsValuesFactory.java (line 755, DateStatsValues#updateTypeSpecificStats), 
> add a cast to double: 
> sumOfSquares += ((double) value * value * count);
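To see why the cast is needed: epoch-millisecond date values are on the order of 1.45e12, so a single square (~2.1e24) already exceeds Long.MAX_VALUE (~9.2e18). A minimal sketch with hypothetical values (not the Solr code itself):

```java
public class DateStatsOverflowDemo {
    public static void main(String[] args) {
        long value = 1_450_000_000_000L; // epoch millis for a date in Dec 2015
        long count = 1;

        // True product is ~2.1e24, far beyond Long.MAX_VALUE (~9.2e18),
        // so 64-bit integer arithmetic silently wraps around.
        long overflowed = value * value * count;

        // The proposed fix: widen to double before multiplying.
        double fixed = (double) value * value * count;

        System.out.println("long arithmetic:   " + overflowed);
        System.out.println("double arithmetic: " + fixed);
    }
}
```

A double loses low-order precision at that magnitude, but for a sum-of-squares statistic that is a far better trade than silent wraparound.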






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064338#comment-15064338
 ] 

Dawid Weiss commented on LUCENE-6937:
-

Well, you could just commit a similar message to solr's old repo folder -- if 
it gets synced up it'd show the same message on github. But honestly, I don't 
think it's worth it (I'd just ask github to close the mirror of these two old 
branches).

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064334#comment-15064334
 ] 

Uwe Schindler commented on LUCENE-6937:
---

bq.  they're just confusing to people, most likely.

especially because in Solr's case, if you go to https://github.com/apache/solr, 
it opens the 1.1 release branch (because it is the first one), so people get 
more confused. In Lucene's case it goes to trunk, which already has the "repo 
moved" message, so https://github.com/apache/lucene is correct.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Comment Edited] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064319#comment-15064319
 ] 

Uwe Schindler edited comment on LUCENE-6937 at 12/18/15 5:53 PM:
-

bq. This will complicate github mirror integration as there are existing forks 
of it already, etc. My opinion is that we should replace it because it's not a 
complete mirror anyway.

+1. This only causes issues for people that have forks or checkouts.

What should we do with https://github.com/apache/solr/tree/trunk and 
https://github.com/apache/lucene/tree/trunk ?

Those are the old pre-lusolr-merge SVN repos. So basically, Dawid does not need 
to clone them anyways, we can leave what exists there. It looks like it is 
complete. Although the trunk branch should be renamed to "master" (or github's 
config changed), because currently it shows the wrong ones if you go to repo's 
homepage (in case of solr it shows version 1.1, because this is the 
alphabetically first branch; for lucene it's interestingly correct).


was (Author: thetaphi):
bq. This will complicate github mirror integration as there are existing forks 
of it already, etc. My opinion is that we should replace it because it's not a 
complete mirror anyway.

+1. This only causes issues for people that have forks or checkouts.

What should we do with https://github.com/apache/solr/tree/trunk and 
https://github.com/apache/lucene/tree/trunk ?

Those are the old pre-lusolr-merge SVN repos. So basically, Dawid does not need 
to clone them anyways, we can leave what exists there. It looks like it is 
complete. Although the trunk branch should be renamed to "master" (or github's 
config changed), because currently it shows the wrong ones if you go to repo's 
homepage.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064326#comment-15064326
 ] 

Dawid Weiss commented on LUCENE-6937:
-

These are obsolete repos. Frankly, I'd just remove them, they're just confusing 
to people, most likely.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Assigned] (SOLR-8436) Realtime-get should support filters

2015-12-18 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-8436:
--

Assignee: Yonik Seeley

> Realtime-get should support filters
> ---
>
> Key: SOLR-8436
> URL: https://issues.apache.org/jira/browse/SOLR-8436
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> RTG currently ignores filters.  There are probably other use-cases for RTG 
> and filters, but one that comes to mind is security filters.






[jira] [Commented] (SOLR-8436) Realtime-get should support filters

2015-12-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8436?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064325#comment-15064325
 ] 

Yonik Seeley commented on SOLR-8436:


I can take a crack at adding this functionality...

> Realtime-get should support filters
> ---
>
> Key: SOLR-8436
> URL: https://issues.apache.org/jira/browse/SOLR-8436
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.4
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>
> RTG currently ignores filters.  There are probably other use-cases for RTG 
> and filters, but one that comes to mind is security filters.






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064319#comment-15064319
 ] 

Uwe Schindler commented on LUCENE-6937:
---

bq. This will complicate github mirror integration as there are existing forks 
of it already, etc. My opinion is that we should replace it because it's not a 
complete mirror anyway.

+1. This only causes issues for people that have forks or checkouts.

What should we do with https://github.com/apache/solr/tree/trunk and 
https://github.com/apache/lucene/tree/trunk ?

Those are the old pre-lusolr-merge SVN repos. So basically, Dawid does not need 
to clone them anyways, we can leave what exists there. It looks like it is 
complete. Although the trunk branch should be renamed to "master" (or github's 
config changed), because currently it shows the wrong ones if you go to repo's 
homepage.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Created] (SOLR-8444) Combine facet telemetry information from shards

2015-12-18 Thread Michael Sun (JIRA)
Michael Sun created SOLR-8444:
-

 Summary: Combine facet telemetry information from shards
 Key: SOLR-8444
 URL: https://issues.apache.org/jira/browse/SOLR-8444
 Project: Solr
  Issue Type: Sub-task
Reporter: Michael Sun









[jira] [Updated] (SOLR-8443) Change /stream handler http param from "stream" to "expr"

2015-12-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8443:
-
Description: 
When passing in a Streaming Expression to the /stream handler you currently use 
the "stream" http parameter. This dates back to when serialized TupleStream 
objects were passed in. Now that the /stream handler only accepts Streaming 
Expressions it makes sense to rename this parameter to "expr". 

For example:

http://localhost:8983/collection1/stream?expr=search(...)



  was:
When passing in a Streaming Expression to the /stream handler you currently use 
the "stream" http parameter. This dates back to when serialized TupleStream 
objects were passed in. Now that the /stream handler only accepts Streaming 
Expressions it makes sense to rename this parameter to "expr". 

This syntax also helps to emphasize that Streaming Expressions are a function 
language.

For example:

http://localhost:8983/collection1/stream?expr=search(...)




> Change /stream handler http param from "stream" to "expr"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "expr". 
> For example:
> http://localhost:8983/collection1/stream?expr=search(...)






[jira] [Updated] (SOLR-8443) Change /stream handler http param from "stream" to "expr"

2015-12-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8443:
-
Description: 
When passing in a Streaming Expression to the /stream handler you currently use 
the "stream" http parameter. This dates back to when serialized TupleStream 
objects were passed in. Now that the /stream handler only accepts Streaming 
Expressions it makes sense to rename this parameter to "expr". 

This syntax also helps to emphasize that Streaming Expressions are a function 
language.

For example:

http://localhost:8983/collection1/stream?expr=search(...)



  was:
When passing in a Streaming Expression to the /stream handler you currently use 
the "stream" http parameter. This dates back to when serialized TupleStream 
objects were passed in. Now that the /stream handler only accepts Streaming 
Expressions it makes sense to rename this parameter to "func". 

This syntax also helps to emphasize that Streaming Expressions are a function 
language.

For example:

http://localhost:8983/collection1/stream?func=search(...)




> Change /stream handler http param from "stream" to "expr"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "expr". 
> This syntax also helps to emphasize that Streaming Expressions are a function 
> language.
> For example:
> http://localhost:8983/collection1/stream?expr=search(...)






[jira] [Commented] (SOLR-8443) Change /stream handler http param from "stream" to "func"

2015-12-18 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064310#comment-15064310
 ] 

Joel Bernstein commented on SOLR-8443:
--

I like it. I'll update the ticket.

> Change /stream handler http param from "stream" to "func"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "func". 
> This syntax also helps to emphasize that Streaming Expressions are a function 
> language.
> For example:
> http://localhost:8983/collection1/stream?func=search(...)






[jira] [Updated] (SOLR-8443) Change /stream handler http param from "stream" to "expr"

2015-12-18 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-8443:
-
Summary: Change /stream handler http param from "stream" to "expr"  (was: 
Change /stream handler http param from "stream" to "func")

> Change /stream handler http param from "stream" to "expr"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "func". 
> This syntax also helps to emphasize that Streaming Expressions are a function 
> language.
> For example:
> http://localhost:8983/collection1/stream?func=search(...)






[jira] [Commented] (SOLR-8412) SchemaManager should synchronize on performOperations method

2015-12-18 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064306#comment-15064306
 ] 

Yonik Seeley commented on SOLR-8412:


bq. We should synchronize on performOperations instead. The net effect will be 
the same but the code will be more clear.

Changing complex synchronization causes warning bells to go off.
Are you sure that the net effect is the same?  I'm not familiar with this part 
of the code, so hopefully someone else who is can chime in... but at first 
blush it definitely doesn't look safe.
This patch changes the locking from using schemaUpdateLock (which is shared 
among multiple objects) to using either schemaUpdateLock or the current 
object's monitor.  It's certainly not simpler or clearer to try to figure out 
if things are still thread safe.

Reviewing the existing code some, I see this:
- SchemaManager.performOperations() calls doOperations() protected by 
schemaUpdateLock
  - this performs a list of operations on the latest ManagedIndexSchema object, 
which *may* be created fresh, but will be passed the same schemaUpdateLock
  - these operations can call things like addFields()

AddSchemaFieldsUpdateProcessor has this:
{code}
// Need to hold the lock during the entire attempt to ensure that
// the schema on the request is the latest
synchronized (oldSchema.getSchemaUpdateLock()) {
  try {
IndexSchema newSchema = oldSchema.addFields(newFields);
{code}
But with the patch, we're locking on a different object, so what the comment 
asserts it is trying to do may be broken?
Actually, it's not at all clear to me why, even in the current code, we don't 
need to grab the latest schema again *after* we take the update lock.

Moving on to addFields(): it looks like it can (with the patch) now be called 
on the same object with two different locks held.  And even on different 
objects it's not clear whether it's still safe.

Bottom line: the synchronization in the current code is complex enough that I 
don't know if the proposed simplifications are safe or not.  If you could add 
some explanation around that, it would be great.


> SchemaManager should synchronize on performOperations method
> 
>
> Key: SOLR-8412
> URL: https://issues.apache.org/jira/browse/SOLR-8412
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-8412.patch, SOLR-8412.patch, SOLR-8412.patch
>
>
> Currently SchemaManager synchronizes on {{schema.getSchemaUpdateLock()}}. We 
> should synchronize on {{performOperations}} instead. 
> The net effect will be the same but the code will be more clear. 
> {{schema.getSchemaUpdateLock()}} is used when you want to edit a schema and 
> add one field at a time. But the way SchemaManager works is that it does bulk 
> operations, i.e. it performs all operations and then persists the final schema. 
> If two concurrent operations took place, the later operation 
> will retry by fetching the latest schema.






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064301#comment-15064301
 ] 

Robert Muir commented on LUCENE-6933:
-

Thanks Dawid, i installed the chrome extension 
(https://chrome.google.com/webstore/detail/github-follow/agalokjhnhheienloigiaoohgmjdpned/)
 which seems to work.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Comment Edited] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064294#comment-15064294
 ] 

Dawid Weiss edited comment on LUCENE-6933 at 12/18/15 5:39 PM:
---

git log (and github) doesn't display log history past rename. 

http://stackoverflow.com/questions/5646174/github-follow-history-by-default

Try this though:
{code}
git log --follow lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
{code}
Shows the history all the way back to 2001.


was (Author: dweiss):
git log (and github) doesn't display log history past rename. Try this though:
{code}
git log --follow lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
{code}
Shows the history all the way back to 2001.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064297#comment-15064297
 ] 

Mark Miller commented on LUCENE-6937:
-

bq. The infras could retrace everything I did to ensure consistency with SVN, 
but I see little point in doing this (takes an awful amount of time and some 
quirky knowledge).

I don't think they will be very interested in doing those things -- just 
getting our Git repo set up at Apache and our GitHub link set up. Most likely, 
they are only going to be interested in touching the things we cannot.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064294#comment-15064294
 ] 

Dawid Weiss commented on LUCENE-6933:
-

git log (and github) doesn't display log history past rename. Try this though:
{code}
git log --follow lucene/core/src/java/org/apache/lucene/index/IndexWriter.java
{code}
Shows the history all the way back to 2001.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064292#comment-15064292
 ] 

Mark Miller commented on LUCENE-6937:
-

Lots of projects at Apache have already migrated, so other than how we clean 
up our svn-git migration, none of this will be new ground.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Comment Edited] (SOLR-8443) Change /stream handler http param from "stream" to "func"

2015-12-18 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064290#comment-15064290
 ] 

Dennis Gove edited comment on SOLR-8443 at 12/18/15 5:35 PM:
-

If you're open to other suggestions: I find that I tend to refer to that 
parameter as the expression. Maybe expr=search().

My thinking here is that one is providing a (potentially complex) expression 
made up of function calls.


was (Author: dpgove):
If open to other suggestions, I find that I tend to refer to that parameter as 
the expression. Maybe expr=search()

> Change /stream handler http param from "stream" to "func"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "func". 
> This syntax also helps to emphasize that Streaming Expressions are a function 
> language.
> For example:
> http://localhost:8983/collection1/stream?func=search(...)






[jira] [Commented] (SOLR-8443) Change /stream handler http param from "stream" to "func"

2015-12-18 Thread Dennis Gove (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064290#comment-15064290
 ] 

Dennis Gove commented on SOLR-8443:
---

If you're open to other suggestions: I find that I tend to refer to that 
parameter as the expression. Maybe expr=search().

> Change /stream handler http param from "stream" to "func"
> -
>
> Key: SOLR-8443
> URL: https://issues.apache.org/jira/browse/SOLR-8443
> Project: Solr
>  Issue Type: Bug
>  Components: SolrJ
>Reporter: Joel Bernstein
>Priority: Minor
>
> When passing in a Streaming Expression to the /stream handler you currently 
> use the "stream" http parameter. This dates back to when serialized 
> TupleStream objects were passed in. Now that the /stream handler only accepts 
> Streaming Expressions it makes sense to rename this parameter to "func". 
> This syntax also helps to emphasize that Streaming Expressions are a function 
> language.
> For example:
> http://localhost:8983/collection1/stream?func=search(...)






[jira] [Commented] (LUCENE-6937) Migrate Lucene project from SVN to Git.

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064287#comment-15064287
 ] 

Dawid Weiss commented on LUCENE-6937:
-

As for infra, technically this should be easy -- set up a git repo, clone 
--mirror the one I uploaded to github... Legally -- I don't know. The infras 
could retrace everything I did to ensure consistency with SVN, but I see little 
point in doing this (it takes an awful amount of time and some quirky knowledge).

Also, I don't know whether we can/should just remove/replace the existing git 
clone at:
git://git.apache.org/lucene-solr.git

This will complicate github mirror integration as there are existing forks of 
it already, etc. My opinion is that we should replace it because it's not a 
complete mirror anyway.

> Migrate Lucene project from SVN to Git.
> ---
>
> Key: LUCENE-6937
> URL: https://issues.apache.org/jira/browse/LUCENE-6937
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Mark Miller
>
> See mailing list discussion: 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201512.mbox/%3CCAL8PwkbFVT83ZbCZm0y-x-MDeTH6HYC_xYEjRev9fzzk5YXYmQ%40mail.gmail.com%3E






[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064270#comment-15064270
 ] 

Robert Muir commented on LUCENE-6933:
-

Is it still expected that there is still a problem with the lucene core/ history?

E.g., here is IndexWriter: 
https://github.com/dweiss/lucene-solr-svn2git/commits/master/lucene/core/src/java/org/apache/lucene/index/IndexWriter.java?page=8


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible then what follows is that exact 
> "linearized" history of a given file is also impossible to record. Let's say 
> changes X, Y and Z have been applied to a branch of a file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges are done on entire branches in git, not on individual 
> files. Yes, there are commits in SVN history that have selective file merges 
> (not entire branches).






Re: Lucene/Solr git mirror will soon turn off

2015-12-18 Thread Mark Miller
I've filed https://issues.apache.org/jira/browse/LUCENE-6937 as a parent
issue to discuss and work through a migration.

I'm going to assume we are going to go ahead with this until someone steps
up and says otherwise. So far we seem to have consensus. In any case, that
JIRA is probably the best place to voice dissent.

With the complete Git repo, we still have to look at the build and any
other implications. Once that is done, we should probably open an INFRA
JIRA issue to start discussing what the INFRA team needs from us to
complete a migration.

- Mark

On Fri, Dec 18, 2015 at 12:05 PM Dawid Weiss  wrote:

>
> I've made some comments about the conversion process here:
>
> https://issues.apache.org/jira/browse/LUCENE-6933?focusedCommentId=15064208#comment-15064208
>
> Feel free to try it out.
> https://github.com/dweiss/lucene-solr-svn2git
>
> I don't know what the next steps are. This looks like a good starting
> point to switch over to git with all the development? The only thing I
> still plan on doing is getting rid of a few large binary blobs in
> historical resources, but even without it this seems acceptable size-wise
> (~200mb).
>
> Dawid
>
>
>
> On Thu, Dec 17, 2015 at 9:13 AM, Dawid Weiss 
> wrote:
>
>>
>> > The question I had (I am sure a very dumb one): WHY do we care about
>> > history preserved perfectly in Git?
>>
>> For me it's for sentimental, archival and task-challenge reasons.
>> Robert's requirement is that git praise/blame/log works on a given file
>> and shows its true history of changes. Everyone has his own reasons I
>> guess. If the initial clone is small enough then I see no problem in
>> keeping the history if we can preserve it.
>>
>> Dawid
>>
>>
>>
>> On Thu, Dec 17, 2015 at 4:52 AM, david.w.smi...@gmail.com <
>> david.w.smi...@gmail.com> wrote:
>>
>>> +1 totally agree.  Anyway, the bloat should largely be the binaries &
>>> unrelated projects, not code (small text files).
>>>
>>> On Wed, Dec 16, 2015 at 10:36 PM Doug Turnbull <
>>> dturnb...@opensourceconnections.com> wrote:
>>>
 In defense of more history immediately available: it is often far more
 useful to poke around code history and run blame to figure out some code
 than to take it at face value. Putting this in a secondary place like the
 Apache SVN repo IMO reduces the readability of the code itself. This is
 doubly true for new developers who won't know about Apache's SVN. And
 Lucene can be quite intricate code. Further, in my own work poking around
 in github mirrors I frequently hit the current cutoff, which is one reason
 I stopped using them for anything but casual investigation.

 I'm not totally against a cutoff point, but I'd advocate for exhausting
 other options first, such as trimming out unrelated projects, binaries, 
 etc.

 -Doug


 On Wednesday, December 16, 2015, Shawn Heisey 
 wrote:

> On 12/16/2015 5:53 PM, Alexandre Rafalovitch wrote:
> > On 16 December 2015 at 00:44, Dawid Weiss 
> wrote:
> >> 4) The size of JARs is really not an issue. The entire SVN repo I
> mirrored
> >> locally (including empty interim commits to cater for
> svn:mergeinfos) is 4G.
> >> If you strip the stuff like javadocs and side projects (Nutch,
> Tika, Mahout)
> >> then I bet the entire history can fit in 1G total. Of course
> stripping JARs
> >> is also doable.
> > I think this answered one of the issues. So, this is not something
> to focus on.
> >
> > The question I had (I am sure a very dumb one): WHY do we care about
> > history preserved perfectly in Git? Because that seems to be the real
> > bottleneck now. Does anybody still check out an intermediate commit
> > in Solr 1.4 branch?
>
> I do not think we need every bit of history -- at least in the primary
> read/write repository.  I wonder how much of a size difference there
> would be between tossing all history before 5.0 and tossing all history
> before the ivy transition was completed.
>
> In the interests of reducing the size and download time of a clone
> operation, I definitely think we should trim history in the main repo
> to
> some arbitrary point, as long as the full history is available
> elsewhere.  It's my understanding that it will remain in
> svn.apache.org
> (possibly forever), and I think we could also create "historical"
> read-only git repos.
>
> Almost every time I am working on the code, I only care about the
> stable
> branch and trunk.  Sometimes I will check out an older 4.x tag so I can
> see the exact code referenced by a stacktrace in a user's error
> message,
> but when this is required, I am willing to go to an entirely different
> repository and chew up bandwidth/disk resources to obtain it, and I do
> not care whether it is git or svn.  As time marches on, fewer people
> will have

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_66) - Build # 15242 - Still Failing!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15242/
Java: 64bit/jdk1.8.0_66 -XX:+UseCompressedOops -XX:+UseSerialGC

4 tests failed.
FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testBasicSortedSet

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([59C2BC07FF545D87:3F3C105C63D72DF2]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.search.TestDimensionalRangeQuery.testBasicSortedSet(TestDimensionalRangeQuery.java:774)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomBinaryMedium

Error Message:
Captured an uncaught exception in thread: Thread[id=1321, name=T0, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=1321, name=T0, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
at 
__randomizedtesting.SeedInfo.seed([

Re: [JENKINS-EA] Lucene-Solr-trunk-Linux (64bit/jdk-9-ea+95) - Build # 15241 - Still Failing!

2015-12-18 Thread Michael McCandless
Woops, I'll fix ;)

Mike McCandless

http://blog.mikemccandless.com


On Fri, Dec 18, 2015 at 11:11 AM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/15241/
> Java: 64bit/jdk-9-ea+95 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC 
> -XX:-CompactStrings
>
> 3 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.lucene.search.TestDimensionalRangeQuery
>
> Error Message:
> The test or suite printed 11714 bytes to stdout and stderr, even though the 
> limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
> completely with @SuppressSysoutChecks or run with -Dtests.verbose=true
>
> Stack Trace:
> java.lang.AssertionError: The test or suite printed 11714 bytes to stdout and 
> stderr, even though the limit was set to 8192 bytes. Increase the limit with 
> @Limit, ignore it completely with @SuppressSysoutChecks or run with 
> -Dtests.verbose=true
> at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
> at 
> org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
> at 
> com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
> at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
> at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at java.lang.Thread.run(Thread.java:747)
>
>
> FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testAllEqual
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=819, name=T2, 
> state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=819, name=T2, state=RUNNABLE, 
> group=TGRP-TestDimensionalRangeQuery]
> Caused by: java.lang.AssertionError: T2: iter=14 id=7 docID=6 
> value=4976449575468379731 (range: 4976449575468377891 TO 4976449575468380722) 
> expected true but got: false deleted?=false 
> query=DimensionalRangeQuery:field=sn_value:[[[B@2611ec8e] TO [[B@6e5c1692]]
> at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)
>
>
> FAILED:  
> org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsMedium
>
> Error Message:
> Captured an uncaught exception in thread: Thread[id=829, name=T2, 
> state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=829, name=T2, state=RUNNABLE, 
> group=TGRP-TestDimensionalRangeQuery]
> Caused by: java.lang.AssertionError: T2: iter=0 id=882 docID=0 
> value=4976449575468422113 (range: 4976449575468396109 TO 4976449575468445494) 
> expected true but got: false deleted?=false 
> query=DimensionalRangeQuery:field=ss_value:[[[B@48d97645] TO [[B@4ce6eb86]]
> at __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
> at 
> org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)
>
>
>
>
> Build Log:
> [...truncated 1393 lines...]
>[junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
>[junit4]   2> Dee 18, 2015 6:10:19 ALUULA 
> com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
>  uncaughtException
>[junit4]   2> WARNING: Uncaught exception in thread: 
> Thread[T2,5,TGRP-TestDimensionalRangeQuery]
>[junit4]   2> java.lang.AssertionError: T2: iter=14 id=7 docID=6 
> value=4976449575468379731 (range: 4976449575468377891 TO 4976449575468380722) 
> expected true but got: false deleted?=false 
> query=DimensionalRangeQuery:field=sn_value:[[[B@2611ec8e] TO [[B@6e5c1692]]
>[junit4]   2>at 
> __randomizedtesting.SeedInfo.seed([81A986CEC279455C]:0)
>[junit4]   2>at org.junit.Assert.fail(Assert.java:93)
>[junit4]   2>at 
> org.apach

[jira] [Commented] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064223#comment-15064223
 ] 

Michael McCandless commented on SOLR-7865:
--

Thanks [~arcadius], your patch looks great!  I'll run tests and commit 
shortly...

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java:
> This is mainly because {code}num * numFactor{code} gets called multiple times 
> from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test is expecting count=1 but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
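The SOLR-7865 bug described above — an over-request factor applied repeatedly instead of once, with the result cutoff taken from the inflated value — can be sketched in plain Java. This is an illustrative stand-in, not the actual BlendedInfixSuggester code: the `NUM_FACTOR` constant, the method names, and the list-based "lookup" are all hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class SuggestCountSketch {
    static final int NUM_FACTOR = 10; // hypothetical over-request factor

    // Buggy shape: the multiplication is applied on every pass, and the
    // inflated value is then used as the final result cutoff, so more than
    // `num` suggestions leak out.
    static List<String> lookupBuggy(List<String> candidates, int num) {
        int requested = num;
        for (int pass = 0; pass < 2; pass++) {
            requested = requested * NUM_FACTOR; // grows: num*10, num*100, ...
        }
        return candidates.subList(0, Math.min(requested, candidates.size()));
    }

    // Fixed shape: over-request once internally (to leave room for blending),
    // but truncate the final result back down to `num`.
    static List<String> lookupFixed(List<String> candidates, int num) {
        int requested = num * NUM_FACTOR; // computed exactly once
        List<String> hits =
            candidates.subList(0, Math.min(requested, candidates.size()));
        return new ArrayList<>(hits.subList(0, Math.min(num, hits.size())));
    }

    public static void main(String[] args) {
        List<String> docs = List.of("the moon", "the sun", "the stars");
        System.out.println(lookupBuggy(docs, 1).size()); // 3: all docs returned
        System.out.println(lookupFixed(docs, 1).size()); // 1: count respected
    }
}
```

With suggest.count=1 the buggy shape returns all 3 candidates, matching the test failure reported in the ticket, while the fixed shape returns exactly one.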



[jira] [Assigned] (SOLR-7865) lookup method implemented in BlendedInfixLookupFactory does not respect suggest.count

2015-12-18 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless reassigned SOLR-7865:


Assignee: Michael McCandless

> lookup method implemented in BlendedInfixLookupFactory does not respect 
> suggest.count
> -
>
> Key: SOLR-7865
> URL: https://issues.apache.org/jira/browse/SOLR-7865
> Project: Solr
>  Issue Type: Bug
>  Components: Suggester
>Affects Versions: 5.2.1
>Reporter: Arcadius Ahouansou
>Assignee: Michael McCandless
> Attachments: LUCENE_7865.patch
>
>
> The following test fails in TestBlendedInfixSuggestions.java:
> This is mainly because {code}num * numFactor{code} gets called multiple times 
> from 
> https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/java/org/apache/solr/spelling/suggest/fst/BlendedInfixLookupFactory.java#L118
> The test is expecting count=1 but we get all 3 docs out.
> {code}
>   public void testSuggestCount() {
> assertQ(req("qt", URI, "q", "the", SuggesterParams.SUGGEST_COUNT, "1", 
> SuggesterParams.SUGGEST_DICT, "blended_infix_suggest_linear"),
> 
> "//lst[@name='suggest']/lst[@name='blended_infix_suggest_linear']/lst[@name='the']/int[@name='numFound'][.='1']"
> );
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr git mirror will soon turn off

2015-12-18 Thread Dawid Weiss
I've made some comments about the conversion process here:
https://issues.apache.org/jira/browse/LUCENE-6933?focusedCommentId=15064208#comment-15064208

Feel free to try it out.
https://github.com/dweiss/lucene-solr-svn2git

I don't know what the next steps are. Does this look like a good starting
point for switching all development over to git? The only thing I still plan
on doing is getting rid of a few large binary blobs in historical
resources, but even without it this seems acceptable size-wise (~200mb).

Dawid



On Thu, Dec 17, 2015 at 9:13 AM, Dawid Weiss  wrote:

>
> > The question I had (I am sure a very dumb one): WHY do we care about history
> preserved perfectly in Git?
>
> For me it's for sentimental, archival and task-challenge reasons. Robert's
> requirement is that git praise/blame/log works on a given file and
> shows its true history of changes. Everyone has his own reasons I guess. If
> the initial clone is small enough then I see no problem in keeping the
> history if we can preserve it.
>
> Dawid
>
>
>
> On Thu, Dec 17, 2015 at 4:52 AM, david.w.smi...@gmail.com <
> david.w.smi...@gmail.com> wrote:
>
>> +1 totally agree.  Anyway, the bloat should largely be the binaries &
>> unrelated projects, not code (small text files).
>>
>> On Wed, Dec 16, 2015 at 10:36 PM Doug Turnbull <
>> dturnb...@opensourceconnections.com> wrote:
>>
>>> In defense of more history immediately available--it is often far more
>>> useful to poke around code history and run blame to figure out some code
>>> than to take it at face value. Putting this in a secondary place like the
>>> Apache SVN repo IMO reduces the readability of the code itself. This is
>>> doubly true for new developers who won't know about Apache's SVN. And
>>> Lucene can be quite intricate code. Further, in my own work poking around
>>> in github mirrors, I frequently hit the current cutoff, which is one
>>> reason I stopped using them for anything but casual investigation.
>>>
>>> I'm not totally against a cutoff point, but I'd advocate for exhausting
>>> other options first, such as trimming out unrelated projects, binaries, etc.
>>>
>>> -Doug
>>>
>>>
>>> On Wednesday, December 16, 2015, Shawn Heisey 
>>> wrote:
>>>
 On 12/16/2015 5:53 PM, Alexandre Rafalovitch wrote:
 > On 16 December 2015 at 00:44, Dawid Weiss 
 wrote:
 >> 4) The size of JARs is really not an issue. The entire SVN repo I
 mirrored
 >> locally (including empty interim commits to cater for
 svn:mergeinfos) is 4G.
 >> If you strip the stuff like javadocs and side projects (Nutch, Tika,
 Mahout)
 >> then I bet the entire history can fit in 1G total. Of course
 stripping JARs
 >> is also doable.
 > I think this answered one of the issues. So, this is not something to
 focus on.
 >
 > The question I had (I am sure a very dumb one): WHY do we care about
 > history preserved perfectly in Git? Because that seems to be the real
 > bottleneck now. Does anybody still check out an intermediate commit
 > in Solr 1.4 branch?

 I do not think we need every bit of history -- at least in the primary
 read/write repository.  I wonder how much of a size difference there
 would be between tossing all history before 5.0 and tossing all history
 before the ivy transition was completed.

 In the interests of reducing the size and download time of a clone
 operation, I definitely think we should trim history in the main repo to
 some arbitrary point, as long as the full history is available
 elsewhere.  It's my understanding that it will remain in svn.apache.org
 (possibly forever), and I think we could also create "historical"
 read-only git repos.

 Almost every time I am working on the code, I only care about the stable
 branch and trunk.  Sometimes I will check out an older 4.x tag so I can
 see the exact code referenced by a stacktrace in a user's error message,
 but when this is required, I am willing to go to an entirely different
 repository and chew up bandwidth/disk resources to obtain it, and I do
 not care whether it is git or svn.  As time marches on, fewer people
 will have reasons to look at the historical record.

 Thanks,
 Shawn


 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


>>> --
>>> *Doug Turnbull **| *Search Relevance Consultant | OpenSource Connections
>>> , LLC | 240.476.9983
>>> Author: Relevant Search 
>>> This e-mail and all contents, including attachments, is considered to be
>>> Company Confidential unless explicitly stated otherwise, regardless
>>> of whether attachments are marked as such.
>>>
>>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Sp

[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read

2015-12-18 Thread Arcadius Ahouansou (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064219#comment-15064219
 ] 

Arcadius Ahouansou commented on SOLR-8146:
--

Thank you very much [~noble.paul].
I will have a look into {{snitch}}

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ---
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 5.3
>Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
>
> h2. Background
> Currently, the CloudSolrClient randomly picks a replica to query.
> This is done by shuffling the list of live URLs to query, then picking the 
> first item from the list.
> This ticket is to allow more flexibility and control, to some extent, over 
> which URLs will be picked for queries.
> Note that this is for queries only and would not affect update/delete/admin 
> operations.
> h2. Implementation
> The current patch uses regex pattern and moves to the top of the list of URLs 
> only those matching the given regex specified by the system property 
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string 
> pattern (snitch?) and use that pattern for matching the URLs.
> Any comment, recommendation or feedback would be appreciated.
> h2. Use Cases
> There are many cases where the ability to choose the node where queries go 
> can be very handy:
> h3. Special node for manual user queries and analytics
> One may have a SolrCloud cluster where every node hosts the same set of 
> collections, with:  
> - multiple large SolrCloud nodes (L) used for production apps, and 
> - 1 small node (S) in the same cluster with less RAM/CPU, used only for 
> manual user queries, data export and other production issue investigation.
> This ticket would allow configuring the applications using SolrJ to query 
> only the (L) nodes.
> This use case is similar to the one described in SOLR-5501 raised by [~manuel 
> lenormand]
> h3. Minimizing network traffic
>  
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or 
> N) separate racks: rack1 and rack2.
> On each rack, we have a set of SolrCloud VMs as well as a couple of client 
> VMs querying solr using SolrJ.
> All solr nodes are identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, 
> and 
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack reads will happen if and only if one of the racks has no 
> available Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever 
> possible.
> Note that write/update/delete/admin operations should not be affected.
> Note that in our use case, we have a cross-DC deployment. So, replace 
> rack1/rack2 with DC1/DC2.
> Any comment would be very appreciated.
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
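The reordering that the SOLR-8146 patch describes — moving URLs matching the `solr.preferredQueryNodePattern` regex to the front of the shuffled list, keeping the rest as fallbacks — amounts to a stable partition. The class and method names below are hypothetical; only the system property name comes from the ticket.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class PreferredNodeSketch {
    // Stable partition: URLs matching the preferred pattern move to the
    // front, each group keeping its (shuffled) relative order. Non-matching
    // URLs stay in the list as fallbacks, so cross-rack reads still work
    // when no preferred node is available.
    static List<String> preferMatching(List<String> urls, String regex) {
        Pattern p = Pattern.compile(regex);
        List<String> preferred = new ArrayList<>();
        List<String> rest = new ArrayList<>();
        for (String url : urls) {
            (p.matcher(url).find() ? preferred : rest).add(url);
        }
        preferred.addAll(rest);
        return preferred;
    }

    public static void main(String[] args) {
        List<String> shuffled = List.of(
            "http://rack2-a:8983/solr",
            "http://rack1-a:8983/solr",
            "http://rack2-b:8983/solr");
        // Analogous to running clients with -Dsolr.preferredQueryNodePattern=rack1:
        // rack1-a is tried first; the rack2 nodes remain as fallbacks.
        System.out.println(preferMatching(shuffled, "rack1"));
    }
}
```

Queries then hit the head of the reordered list first, which gives the rack-local (or dedicated-node) behavior described in the use cases without affecting updates or admin operations.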



[jira] [Comment Edited] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064208#comment-15064208
 ] 

Dawid Weiss edited comment on LUCENE-6933 at 12/18/15 5:01 PM:
---

I pushed a test repo with merged history to:
https://github.com/dweiss/lucene-solr-svn2git

A few remarks.

* I left only branches {{branch_3x}}, {{branch_4x}} and {{branch_5x}} as active 
branches. {{trunk}} becomes {{master}}.
* The {{master}}'s history is not entirely up to date; we can fill in remaining 
commits by fast-forwarding the remaining commits manually if we switch to git.
* All the historical branches are tags under {{historical/branches/*}}, invoke 
{{git tag}} to see the list of tags.
* All releases are tagged in a consistent manner as 
{{releases/lucene,solr,lucene-solr/number}}. Previous "tags" from SVN are 
available under historical tags (see above).
* You can see "graft points" in history where Solr's, Lucene and Lucene-Solr's 
history is merged, see tags {{grafts/*}}.
* The size of .git repo with all JARs inside was 455mb. I truncated all the 
JARs to 0 bytes (but left their filenames in history), the size of git repo 
after this dropped to 214mb. There are still some large binary blobs (Kuromoji 
dictionaries, europarl, etc.). I'll see if I can reduce it even more, but this 
seems acceptable already.
* There are some oddball file permission issues on Windows.  Use {{git config 
core.filemode false}} to ignore.
* Checkout master and issue {{git log --follow 
lucene/core/src/java/org/apache/lucene/index/IndexWriter.java}}.
* The blame history may *not* be identical due to differences in how git and 
svn handle merges, etc., but the history of each file should be fairly accurate.
* {{gitk --all}} makes a very interesting reading.


was (Author: dweiss):
I pushed a test repo with merged history to:
https://github.com/dweiss/lucene-solr-svn2git

A few remarks.

* I left only branches {{branch_3x}}, {{branch_4x}} and {{branch_5x}} as active 
branches. {{trunk}} becomes {{master}}.
* The {{master}}'s history is not entirely up to date; we can fill in remaining 
commits by fast-forwarding the remaining commits manually if we switch to git.
* All the historical branches are tags under {{historical/branches/*}}, invoke 
{{git tag}} to see the list of tags.
* All releases are tagged in a consistent manner as 
{{releases/lucene,solr,lucene-solr/number}}. Previous "tags" from SVN are 
available under historical tags (see above).
* You can see "graft points" in history where Solr's, Lucene and Lucene-Solr's 
history is merged, see tags {{grafts/*}}.
* The size of .git repo with all JARs inside was 455mb. I truncated all the 
JARs to 0 bytes (but left their filenames in history), the size of git repo 
after this dropped to 214mb. There are still some large binary blobs (Kuromoji 
dictionaries, europarl, etc.). I'll see if I can reduce it even more, but this 
seems acceptable already.
* There are some oddball file permission issues on Windows.  Use {{git config 
core.filemode false}} to ignore.
* Checkout master and issue {{git log --follow 
lucene/core/src/java/org/apache/lucene/index/IndexWriter.java}}.
* The blame history may *not* be identical due to differences in how git and 
svn handle merges, etc., but the history of each file should be fairly accurate.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is n

[jira] [Commented] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064208#comment-15064208
 ] 

Dawid Weiss commented on LUCENE-6933:
-

I pushed a test repo with merged history to:
https://github.com/dweiss/lucene-solr-svn2git

A few remarks.

* I left only branches {{branch_3x}}, {{branch_4x}} and {{branch_5x}} as active 
branches. {{trunk}} becomes {{master}}.
* The {{master}}'s history is not entirely up to date; we can fill in remaining 
commits by fast-forwarding the remaining commits manually if we switch to git.
* All the historical branches are tags under {{historical/branches/*}}, invoke 
{{git tag}} to see the list of tags.
* All releases are tagged in a consistent manner as 
{{releases/lucene,solr,lucene-solr/number}}. Previous "tags" from SVN are 
available under historical tags (see above).
* You can see "graft points" in history where Solr's, Lucene and Lucene-Solr's 
history is merged, see tags {{grafts/*}}.
* The size of .git repo with all JARs inside was 455mb. I truncated all the 
JARs to 0 bytes (but left their filenames in history), the size of git repo 
after this dropped to 214mb. There are still some large binary blobs (Kuromoji 
dictionaries, europarl, etc.). I'll see if I can reduce it even more, but this 
seems acceptable already.
* There are some oddball file permission issues on Windows.  Use {{git config 
core.filemode false}} to ignore.
* Checkout master and issue {{git log --follow 
lucene/core/src/java/org/apache/lucene/index/IndexWriter.java}}.
* The blame history may *not* be identical due to differences in how git and 
svn handle merges, etc., but the history of each file should be fairly accurate.

> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty files instead for their 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible, it follows that the exact 
> "linearized" history of a given file is also impossible to record. Say 
> changes X, Y and Z have been applied to a branch of file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges in git operate on entire branches, not on individual files 
> (and there are commits in SVN history with selective file merges, not entire 
> branches).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 885 - Still Failing

2015-12-18 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/885/

1 tests failed.
FAILED:  
org.apache.lucene.search.suggest.document.TestSuggestField.testSuggestOnMostlyDeletedDocuments

Error Message:
MockDirectoryWrapper: cannot close: there are still open files: 
{_y_completion_0.pay=1, _y_completion_0.tim=1, _y.dim=1, _y.fdt=1, 
_y_completion_0.pos=1, _y.nvd=1, _y_completion_0.doc=1, _y_completion_0.lkp=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
open files: {_y_completion_0.pay=1, _y_completion_0.tim=1, _y.dim=1, _y.fdt=1, 
_y_completion_0.pos=1, _y.nvd=1, _y_completion_0.doc=1, _y_completion_0.lkp=1}
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:771)
at 
org.apache.lucene.search.suggest.document.TestSuggestField.after(TestSuggestField.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:929)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: unclosed IndexInput: _y_completion_0.pos
at 
org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:659)
at 
org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:703)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsReader.<init>(Lucene50PostingsReader.java:88)
at 
org.apache.lucene.codecs.lucene50.Lucene50PostingsFormat.fieldsProducer(Lucene50PostingsFormat.java:443)
at 
org.apache.lucene.search.suggest.document.CompletionFieldsProducer.<init>(CompletionFieldsProducer.java:92)

[JENKINS] Lucene-Solr-trunk-Solaris (64bit/jdk1.8.0) - Build # 269 - Failure!

2015-12-18 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Solaris/269/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomLongsMedium

Error Message:
Captured an uncaught exception in thread: Thread[id=350, name=T1, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=350, name=T1, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
Caused by: java.lang.AssertionError: T1: iter=0 id=3839 docID=4 
value=-6512425785367661192 (range: -7507609947746435620 TO 2010628042616866333) 
expected true but got: false deleted?=false 
query=DimensionalRangeQuery:field=sn_value:[[[B@23428fab] TO [[B@277d0f1b]]
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)


FAILED:  org.apache.lucene.search.TestDimensionalRangeQuery.testAllEqual

Error Message:
Captured an uncaught exception in thread: Thread[id=358, name=T3, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=358, name=T3, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
Caused by: java.lang.AssertionError: T3: iter=1 id=9238 docID=0 
value=-8430619521360666154 (range: -8704842830894929849 TO 
-2114881473162340490) expected true but got: false deleted?=false 
query=DimensionalRangeQuery:field=ss_value:[[[B@1db6c503] TO [[B@72cda9ca]]
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1._run(TestDimensionalRangeQuery.java:357)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$1.run(TestDimensionalRangeQuery.java:265)


FAILED:  
org.apache.lucene.search.TestDimensionalRangeQuery.testRandomBinaryMedium

Error Message:
Captured an uncaught exception in thread: Thread[id=361, name=T0, 
state=RUNNABLE, group=TGRP-TestDimensionalRangeQuery]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=361, name=T0, state=RUNNABLE, 
group=TGRP-TestDimensionalRangeQuery]
at 
__randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7:E04076331AA79D70]:0)
Caused by: java.lang.AssertionError: 472 hits were wrong
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$2._run(TestDimensionalRangeQuery.java:637)
at 
org.apache.lucene.search.TestDimensionalRangeQuery$2.run(TestDimensionalRangeQuery.java:533)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.search.TestDimensionalRangeQuery

Error Message:
The test or suite printed 206572 bytes to stdout and stderr, even though the 
limit was set to 8192 bytes. Increase the limit with @Limit, ignore it 
completely with @SuppressSysoutChecks or run with -Dtests.verbose=true

Stack Trace:
java.lang.AssertionError: The test or suite printed 206572 bytes to stdout and 
stderr, even though the limit was set to 8192 bytes. Increase the limit with 
@Limit, ignore it completely with @SuppressSysoutChecks or run with 
-Dtests.verbose=true
at __randomizedtesting.SeedInfo.seed([976DF173CE0BE8A7]:0)
at 
org.apache.lucene.util.TestRuleLimitSysouts.afterIfSuccessful(TestRuleLimitSysouts.java:212)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterIfSuccessful(TestRuleAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:37)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 1685 lines...]
   [junit4] Suite: org.apache.lucene.search.TestDimensionalRangeQuery
   [junit4]   2> des. 18, 2015 11:56:09 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
   [junit4]   2> WARNING: 

[jira] [Comment Edited] (LUCENE-6933) Create a (cleaned up) SVN history in git

2015-12-18 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15061015#comment-15061015
 ] 

Dawid Weiss edited comment on LUCENE-6933 at 12/18/15 4:53 PM:
---

After some more digging and experiments, it seems realistic that the following 
multi-step process will achieve the goals above.
* (/) create local SVN repo with the above, preserving dummy commits so that 
version numbers match Apache's SVN
* (/) use {{git-svn}} to mirror (separately) {{lucene/java/*}}, 
{{lucene/dev/*}} and Solr's pre-merge history.
* (/) import those separate history trees into one git repo, use grafts and 
branch filtering to stitch them together.
* (/) use https://rtyley.github.io/bfg-repo-cleaner/ to remove/truncate binary 
blobs in the git repo
* (/) do any finalizing cleanups (clean up any junk branches, tags, add actual 
release tags throughout the history).

I'll proceed and try to do all the above locally. If it works, I'll push a 
"test" repo to github so that folks can inspect. Everything takes ages. 
Patience.



was (Author: dweiss):
After some more digging and experiments, it seems realistic that the following 
multi-step process will achieve the goals above.
* (/) create local SVN repo with the above, preserving dummy commits so that 
version numbers match Apache's SVN
* (/) use {{git-svn}} to mirror (separately) {{lucene/java/*}}, 
{{lucene/dev/*}} and Solr's pre-merge history.
* (/) import those separate history trees into one git repo, use grafts and 
branch filtering to stitch them together.
* use https://rtyley.github.io/bfg-repo-cleaner/ to remove/truncate binary 
blobs in the git repo
* do any finalizing cleanups (clean up any junk branches, tags, add actual 
release tags throughout the history).

I'll proceed and try to do all the above locally. If it works, I'll push a 
"test" repo to github so that folks can inspect. Everything takes ages. 
Patience.


> Create a (cleaned up) SVN history in git
> 
>
> Key: LUCENE-6933
> URL: https://issues.apache.org/jira/browse/LUCENE-6933
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
> Attachments: multibranch-commits.log
>
>
> Goals:
> * selectively drop projects and core-irrelevant stuff:
>   ** {{lucene/site}}
>   ** {{lucene/nutch}}
>   ** {{lucene/lucy}}
>   ** {{lucene/tika}}
>   ** {{lucene/hadoop}}
>   ** {{lucene/mahout}}
>   ** {{lucene/pylucene}}
>   ** {{lucene/lucene.net}}
>   ** {{lucene/old_versioned_docs}}
>   ** {{lucene/openrelevance}}
>   ** {{lucene/board-reports}}
>   ** {{lucene/java/site}}
>   ** {{lucene/java/nightly}}
>   ** {{lucene/dev/nightly}}
>   ** {{lucene/dev/lucene2878}}
>   ** {{lucene/sandbox/luke}}
>   ** {{lucene/solr/nightly}}
> * preserve the history of all changes to core sources (Solr and Lucene).
>   ** {{lucene/java}}
>   ** {{lucene/solr}}
>   ** {{lucene/dev/trunk}}
>   ** {{lucene/dev/branches/branch_3x}}
>   ** {{lucene/dev/branches/branch_4x}}
>   ** {{lucene/dev/branches/branch_5x}}
> * provide a way to link git commits and history with svn revisions (amend the 
> log message).
> * annotate release tags
> * deal with large binary blobs (JARs): keep empty placeholder files for 
> historical reference only.
> Non goals:
> * no need to preserve "exact" merge history from SVN (see "impossible" below).
> * Ability to build ancient versions is not an issue.
> Impossible:
> * It is not possible to preserve SVN "merge history" because of the following 
> reasons:
>   ** Each commit in SVN operates on individual files. So one commit can 
> "copy" (and record a merge) files from anywhere in the object tree, even 
> modifying them along the way. There simply is no equivalent for this in git. 
>   ** There are historical commits in SVN that apply changes to multiple 
> branches in one commit ({{r1569975}}) and merges *from* multiple branches in 
> one commit ({{r940806}}).
> * Because exact merge tracking is impossible, it follows that the exact 
> "linearized" history of a given file is also impossible to record. Say 
> changes X, Y and Z have been applied to a branch of file A and then merged 
> back. In git, this would be reflected as a single commit flattening X, Y and 
> Z (on the target branch) and three independent commits on the branch. The 
> "copy-from" link from one branch to another cannot be represented because, as 
> mentioned, merges in git operate on entire branches, not on individual files 
> (and there are commits in SVN history with selective file merges, not entire 
> branches).
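The goal above of linking git commits with svn revisions amounts to appending the originating revision to each rewritten log message. A minimal sketch of such an amendment step (the trailer format and function name here are illustrative assumptions; {{git-svn}} itself records a {{git-svn-id:}} line with the repository URL and revision):

```python
def amend_log_message(message: str, svn_revision: int) -> str:
    """Append an SVN revision trailer to a rewritten commit message.

    The trailer format is an illustrative assumption; git-svn records a
    'git-svn-id:' line with the repository URL and revision instead.
    """
    trailer = f"SVN revision: r{svn_revision}"
    if trailer in message:  # keep the filter idempotent if run twice
        return message
    return message.rstrip("\n") + "\n\n" + trailer + "\n"

# The kind of rewrite a 'git filter-branch --msg-filter' step would apply.
print(amend_log_message("LUCENE-6933: stitch histories together", 1720824))
```

A filter like this would be applied once per commit during the history rewrite, so idempotence matters if the conversion is re-run.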




[jira] [Resolved] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-18 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-8230.

   Resolution: Fixed
Fix Version/s: 5.5

Committed.  Thanks Michael!
I also added a simple test to just test for the presence of "facet-info" and 
also randomly added it in the main TestJsonFacets test just to ensure that it 
didn't cause exceptions or other issues for all the various facet types.

From a style perspective, I also moved the license to the top of the new file.

> Create Facet Telemetry for Nested Facet Query
> -
>
> Key: SOLR-8230
> URL: https://issues.apache.org/jira/browse/SOLR-8230
> Project: Solr
>  Issue Type: Sub-task
>  Components: Facet Module
>Reporter: Michael Sun
> Fix For: 5.5, Trunk
>
> Attachments: SOLR-8230.patch, SOLR-8230.patch, SOLR-8230.patch, 
> SOLR-8230.patch, SOLR-8230.patch
>
>
> This is the first step for SOLR-8228 Facet Telemetry. It's going to implement 
> the telemetry for a nested facet query and put the information obtained in 
> debug field in response.
> Here is an example of telemetry returned from query. 
> Query
> {code}
> curl http://localhost:8228/solr/films/select -d 
> 'q=*:*&wt=json&indent=true&debugQuery=true&json.facet={
> top_genre: {
>   type:terms,
>   field:genre,
>   numBuckets:true,
>   limit:2,
>   facet: {
> top_director: {
> type:terms,
> field:directed_by,
> numBuckets:true,
> limit:2
> },
> first_release: {
> type:terms,
> field:initial_release_date,
> sort:{index:asc},
> numBuckets:true,
> limit:2
> }
>   }
> }
> }'
> {code}
> Telemetry returned (inside debug part)
> {code}
> "facet-trace":{
>   "processor":"FacetQueryProcessor",
>   "elapse":1,
>   "query":null,
>   "sub-facet":[{
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":1,
>   "field":"genre",
>   "limit":2,
>   "sub-facet":[{
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Drama",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorUIF",
>   "elapse":0,
>   "field":"directed_by",
>   "limit":2},
> {
>   "filter":"genre:Comedy",
>   "processor":"FacetFieldProcessorNumeric",
>   "elapse":0,
>   "field":"initial_release_date",
>   "limit":2}]}]},
> {code}
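The facet-trace above nests {{sub-facet}} lists recursively. A small sketch of walking such a tree (assuming only the keys visible in the sample: {{processor}}, {{elapse}}, {{sub-facet}}) to total elapsed time per processor:

```python
def elapse_by_processor(node, totals=None):
    """Recursively sum 'elapse' per 'processor' over a facet-trace tree.

    Assumes only the keys shown in the sample trace: 'processor',
    'elapse', and an optional 'sub-facet' list of child nodes.
    """
    if totals is None:
        totals = {}
    proc = node.get("processor")
    if proc is not None:
        totals[proc] = totals.get(proc, 0) + node.get("elapse", 0)
    for child in node.get("sub-facet", []):
        elapse_by_processor(child, totals)
    return totals

# A trimmed copy of the sample trace from the issue description.
trace = {
    "processor": "FacetQueryProcessor",
    "elapse": 1,
    "sub-facet": [{
        "processor": "FacetFieldProcessorUIF",
        "elapse": 1,
        "field": "genre",
        "sub-facet": [
            {"processor": "FacetFieldProcessorUIF", "elapse": 0},
            {"processor": "FacetFieldProcessorNumeric", "elapse": 0},
        ],
    }],
}
print(elapse_by_processor(trace))
```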






[jira] [Created] (SOLR-8443) Change /stream handler http param from "stream" to "func"

2015-12-18 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-8443:


 Summary: Change /stream handler http param from "stream" to "func"
 Key: SOLR-8443
 URL: https://issues.apache.org/jira/browse/SOLR-8443
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Reporter: Joel Bernstein
Priority: Minor


When passing a Streaming Expression to the /stream handler you currently use 
the "stream" HTTP parameter. This dates back to when serialized TupleStream 
objects were passed in. Now that the /stream handler only accepts Streaming 
Expressions, it makes sense to rename this parameter to "func". 

This syntax also helps to emphasize that Streaming Expressions form a 
functional language.

For example:

http://localhost:8983/collection1/stream?func=search(...)
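Whatever the parameter is ultimately named, the expression must be URL-encoded when sent over HTTP. A sketch of building such a request URL (the parameter name follows the proposal; the endpoint path and expression are assumptions for illustration):

```python
from urllib.parse import urlencode

# "func" is the proposed parameter name; "stream" is the current one.
expression = 'search(collection1, q="*:*", fl="id", sort="id asc")'
query = urlencode({"func": expression})
url = "http://localhost:8983/solr/collection1/stream?" + query
print(url)
```

urlencode percent-escapes the parentheses, quotes, and spaces in the expression, which matters once expressions nest several functions deep.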








[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064179#comment-15064179
 ] 

ASF subversion and git services commented on SOLR-8230:
---

Commit 1720824 from [~yo...@apache.org] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1720824 ]

SOLR-8230: JSON Facet API: add facet-info to debug when debugQuery=true







[jira] [Commented] (SOLR-8230) Create Facet Telemetry for Nested Facet Query

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064176#comment-15064176
 ] 

ASF subversion and git services commented on SOLR-8230:
---

Commit 1720823 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1720823 ]

SOLR-8230: JSON Facet API: add facet-info to debug when debugQuery=true







[jira] [Commented] (SOLR-8317) add responseHeader and response accessors to SolrQueryResponse

2015-12-18 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064174#comment-15064174
 ] 

ASF subversion and git services commented on SOLR-8317:
---

Commit 1720822 from [~cpoerschke] in branch 'dev/trunk'
[ https://svn.apache.org/r1720822 ]

SOLR-8317: add responseHeader and response accessors to SolrQueryResponse. 
TestSolrQueryResponse tests for accessors.

> add responseHeader and response accessors to SolrQueryResponse
> --
>
> Key: SOLR-8317
> URL: https://issues.apache.org/jira/browse/SOLR-8317
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-8317-part1of2.patch, SOLR-8317.patch
>
>
> To make code easier to understand and modify. Proposed patch against trunk to 
> follow.






[jira] [Commented] (SOLR-8435) Long update times Solr 5.3.1

2015-12-18 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15064152#comment-15064152
 ] 

Erick Erickson commented on SOLR-8435:
--

Check that you aren't somehow building suggesters on commit.

> Long update times Solr 5.3.1
> 
>
> Key: SOLR-8435
> URL: https://issues.apache.org/jira/browse/SOLR-8435
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Affects Versions: 5.3.1
> Environment: Ubuntu server 128Gb
>Reporter: Kenny Knecht
> Fix For: 5.2.1
>
>
> We have two 128 GB Ubuntu servers in a SolrCloud configuration. We update by 
> curling JSON files of 20,000 documents each. In 5.2.1 this consistently takes 
> between 19 and 24 seconds. In 5.3.1 it usually takes about 20s, but for about 
> 20% of the files it takes much longer: up to 500s! Which files are affected 
> seems to be quite random. Is this a known bug? Any workaround? Is it fixed in 
> 5.4?





