[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_67) - Build # 4442 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4442/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true)

1 tests failed.
REGRESSION:  org.apache.solr.cloud.RollingRestartTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 330 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 330 seconds
    at __randomizedtesting.SeedInfo.seed([D97278EC125C119A:5894F6F4650371A6]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:137)
    at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:132)
    at org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:828)
    at org.apache.solr.cloud.RollingRestartTest.doTest(RollingRestartTest.java:62)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.GeneratedMethodAccessor96.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11477 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11477/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:+UseCompressedOops -XX:+UseG1GC (asserts: false)

2 tests failed.
REGRESSION:  org.apache.solr.DistributedIntervalFacetingTest.testDistribSearch

Error Message:
Expected mime type application/octet-stream but got text/html. <html><head><meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/><title>Error 500 {msg=SolrCore 'collection1' is not available due to init failure: Error instantiating class: 'org.apache.lucene.util.LuceneTestCase$3',trace=org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: Error instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
    at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:765)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:294)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
    at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
    at org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
    at org.eclipse.jetty.server.Server.handle(Server.java:368)
    at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
    at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
    at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
    at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:953)
    at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
    at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:628)
    at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:893)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:652)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:509)
    at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:273)
    at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:267)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    ... 1 more
Caused by: org.apache.solr.common.SolrException: Error instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
    at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:533)
    at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:517)
    at org.apache.solr.update.SolrIndexConfig.buildMergeScheduler(SolrIndexConfig.java:289)
    at org.apache.solr.update.SolrIndexConfig.toIndexWriterConfig(SolrIndexConfig.java:214)
    at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:78)
    at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
    at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:529)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:796)
    ... 8 more
Caused by: java.lang.IllegalAccessException: Class org.apache.solr.core.SolrResourceLoader can not access a member of class org.apache.lucene.util.LuceneTestCase$3 with modifiers ""
    at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:101)
    at java.lang.Class.newInstance(Class.java:431)
    at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:529)
    ... 15 more

Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2215 - Still Failing

2014-11-20 Thread Alan Woodward
I committed a fix.  There's now a check in newRandomConfig() to see if there's 
a '$' in the merge scheduler class name, and if there is, it just uses CMS 
instead.
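
A minimal sketch of what that check could look like (variable names assumed here; the committed change lives in SolrTestCaseJ4.newRandomConfig(), so treat this as an illustration rather than the actual patch):

  String msClassName = iwc.getMergeScheduler().getClass().getName();
  // Anonymous subclasses get synthetic names like
  // "org.apache.lucene.util.LuceneTestCase$3"; Solr's resource loader
  // cannot instantiate those reflectively, so fall back to plain CMS.
  if (msClassName.contains("$")) {
    msClassName = ConcurrentMergeScheduler.class.getName();
  }
  System.setProperty("solr.tests.mergeScheduler", msClassName);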

Alan Woodward
www.flax.co.uk


On 19 Nov 2014, at 19:07, Alan Woodward wrote:

 So digging in…  Solr instantiates the merge scheduler via its 
 ResourceLoader, which takes a class name.  The random indexconfig snippet 
 sets the classname to whatever the value of ${solr.tests.mergeScheduler} is.  
 This is set in SolrTestCaseJ4.newRandomConfig():
 
 System.setProperty("solr.tests.mergeScheduler", 
 iwc.getMergeScheduler().getClass().getName());
 
 And I guess you can't call Class.newInstance() on an anonymous class?
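 
 For reference, a tiny standalone demo (file and class name invented here) 
 shows why: the constructor javac synthesizes for an anonymous class is 
 package-private, so Class.newInstance() called from another package - which 
 is what SolrResourceLoader does - fails the access check.
 
 import java.lang.reflect.Constructor;
 import java.lang.reflect.Modifier;
 
 public class AnonCtorDemo {
   public static void main(String[] args) throws Exception {
     Runnable r = new Runnable() { @Override public void run() {} };
     // Prints a synthetic name such as "AnonCtorDemo$1".
     System.out.println(r.getClass().getName());
     Constructor<?> ctor = r.getClass().getDeclaredConstructor();
     // Prints "" -- no modifiers, i.e. package-private; this is the
     // 'with modifiers ""' in the IllegalAccessException above.
     System.out.println("\"" + Modifier.toString(ctor.getModifiers()) + "\"");
   }
 }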
 
 Alan Woodward
 www.flax.co.uk
 
 
 On 19 Nov 2014, at 18:10, Michael McCandless wrote:
 
  Oh, I also saw this before committing, was confused, ran "ant clean
  test" in the solr directory, and it passed, so I thought "ant clean" fixed
  it ... I guess not.
 
 With this change, in LuceneTestCase's newIndexWriterConfig, I
 sometimes randomly subclass ConcurrentMergeScheduler (to turn off
 merge throttling) in the random IWC that's returned.  Does this make
 Solr unhappy?  Why is Solr trying to instantiate the merge scheduler
 class that's already instantiated on IWC?  I'm confused...
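  
  For context, Solr only has the class *name* at this point: 
  SolrIndexConfig.buildMergeScheduler() reads the string substituted from 
  ${solr.tests.mergeScheduler} into the config and asks the resource loader 
  for a brand-new instance, so the MergeScheduler object already sitting on 
  the test's IWC is never reused. A paraphrased sketch (not the literal 
  source; only the resource-loader call matches the stack trace above):
  
  // msClassName resolved from the random indexconfig snippet:
  MergeScheduler scheduler =
      resourceLoader.newInstance(msClassName, MergeScheduler.class);
  // For the anonymous subclass this resolves to "...LuceneTestCase$3",
  // and the reflective no-arg construction fails with IllegalAccessException.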
 
 Mike McCandless
 
 http://blog.mikemccandless.com
 
 
 On Wed, Nov 19, 2014 at 1:00 PM, Alan Woodward a...@flax.co.uk wrote:
 I think this might be to do with Mike's changes in r1640457, but for some
 reason I can't update from svn or the apache git repo at the moment, so I'm
 not certain.
 
 Alan Woodward
 www.flax.co.uk
 
 
 On 19 Nov 2014, at 17:05, Chris Hostetter wrote:
 
 
 Apologies -- I haven't been following the commits closely this week.
 
 Does anyone have any idea what changed at the low levels of the Solr
 testing class hierarchy to cause these failures in a variety of tests?
 
 : SolrCore 'collection1' is not available due to init failure: Error
 : instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
 
 : Caused by: org.apache.solr.common.SolrException: Error instantiating
 class: 'org.apache.lucene.util.LuceneTestCase$3'
 : at
 org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:532)
 : at
 org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:517)
 : at
 org.apache.solr.update.SolrIndexConfig.buildMergeScheduler(SolrIndexConfig.java:289)
 : at
 org.apache.solr.update.SolrIndexConfig.toIndexWriterConfig(SolrIndexConfig.java:214)
 : at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
 : at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
 : at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:529)
 : at org.apache.solr.core.SolrCore.<init>(SolrCore.java:796)
 : ... 8 more
 : Caused by: java.lang.IllegalAccessException: Class
 org.apache.solr.core.SolrResourceLoader can not access a member of class
 org.apache.lucene.util.LuceneTestCase$3 with modifiers ""
 : at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
 : at java.lang.Class.newInstance(Class.java:368)
 : at
 org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:529)
 : ... 15 more
 
 :[junit4]   2> NOTE: reproduce with: ant test -Dtestcase=SampleTest
 -Dtests.method=testSimple -Dtests.seed=2E6E8F9ADADFEACF -Dtests.multiplier=2
 -Dtests.slow=true -Dtests.locale=ja_JP_JP_#u-ca-japanese
 -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true
 -Dtests.file.encoding=US-ASCII
 
 
 -Hoss
 http://www.lucidworks.com/
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
 
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org
 
 



Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2215 - Still Failing

2014-11-20 Thread Michael McCandless
Thanks Alan!

Mike McCandless

http://blog.mikemccandless.com


On Thu, Nov 20, 2014 at 5:12 AM, Alan Woodward a...@flax.co.uk wrote:
 I committed a fix.  There's now a check in newRandomConfig() to see if
 there's a $ in the merge scheduler class name, and if there is it just
 uses CMS instead.

 Alan Woodward
 www.flax.co.uk


 On 19 Nov 2014, at 19:07, Alan Woodward wrote:

  So digging in…  Solr instantiates the merge scheduler via its
  ResourceLoader, which takes a class name.  The random indexconfig snippet
  sets the classname to whatever the value of ${solr.tests.mergeScheduler} is.
  This is set in SolrTestCaseJ4.newRandomConfig():

  System.setProperty("solr.tests.mergeScheduler",
  iwc.getMergeScheduler().getClass().getName());

  And I guess you can't call Class.newInstance() on an anonymous class?

 Alan Woodward
 www.flax.co.uk


 On 19 Nov 2014, at 18:10, Michael McCandless wrote:

  Oh, I also saw this before committing, was confused, ran "ant clean
  test" in the solr directory, and it passed, so I thought "ant clean" fixed
  it ... I guess not.

  With this change, in LuceneTestCase's newIndexWriterConfig, I
  sometimes randomly subclass ConcurrentMergeScheduler (to turn off
  merge throttling) in the random IWC that's returned.  Does this make
  Solr unhappy?  Why is Solr trying to instantiate the merge scheduler
  class that's already instantiated on IWC?  I'm confused...

  Mike McCandless

  http://blog.mikemccandless.com


  On Wed, Nov 19, 2014 at 1:00 PM, Alan Woodward a...@flax.co.uk wrote:

  I think this might be to do with Mike's changes in r1640457, but for some
  reason I can't update from svn or the apache git repo at the moment, so I'm
  not certain.

  Alan Woodward

  www.flax.co.uk


  On 19 Nov 2014, at 17:05, Chris Hostetter wrote:


  Apologies -- I haven't been following the commits closely this week.

  Does anyone have any idea what changed at the low levels of the Solr
  testing class hierarchy to cause these failures in a variety of tests?

  : SolrCore 'collection1' is not available due to init failure: Error
  : instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'

  : Caused by: org.apache.solr.common.SolrException: Error instantiating
  class: 'org.apache.lucene.util.LuceneTestCase$3'
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:532)
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:517)
  : at org.apache.solr.update.SolrIndexConfig.buildMergeScheduler(SolrIndexConfig.java:289)
  : at org.apache.solr.update.SolrIndexConfig.toIndexWriterConfig(SolrIndexConfig.java:214)
  : at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
  : at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
  : at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:529)
  : at org.apache.solr.core.SolrCore.<init>(SolrCore.java:796)
  : ... 8 more
  : Caused by: java.lang.IllegalAccessException: Class
  org.apache.solr.core.SolrResourceLoader can not access a member of class
  org.apache.lucene.util.LuceneTestCase$3 with modifiers ""
  : at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
  : at java.lang.Class.newInstance(Class.java:368)
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:529)
  : ... 15 more

  :[junit4]   2> NOTE: reproduce with: ant test -Dtestcase=SampleTest
  -Dtests.method=testSimple -Dtests.seed=2E6E8F9ADADFEACF -Dtests.multiplier=2
  -Dtests.slow=true -Dtests.locale=ja_JP_JP_#u-ca-japanese
  -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true
  -Dtests.file.encoding=US-ASCII


  -Hoss
  http://www.lucidworks.com/

  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org


  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
  For additional commands, e-mail: dev-h...@lucene.apache.org





-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6769) Election bug

2014-11-20 Thread Alexander S. (JIRA)
Alexander S. created SOLR-6769:
--

 Summary: Election bug
 Key: SOLR-6769
 URL: https://issues.apache.org/jira/browse/SOLR-6769
 Project: Solr
  Issue Type: Bug
Reporter: Alexander S.
Priority: Critical


Hello, I have a very simple setup: 2 shards and 2 replicas (4 nodes in total).

What I did was just stop the shards; the first shard stopped immediately, but 
the second one took about 5 minutes to stop. You can see on the screenshot what 
happened next. In short:
1. Shard 1 stopped normally
2. Replica 1 became a leader
3. Shard 2 was still performing some job but wasn't accepting connections
4. Replica 2 did not become a leader because Shard 2 was still there but didn't 
work
5. The entire cluster went down until Shard 2 stopped and Replica 2 became a leader

Marked as critical because this shuts down the entire cluster. Please adjust if 
I am wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6769) Election bug

2014-11-20 Thread Alexander S. (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexander S. updated SOLR-6769:
---
Attachment: Screenshot 876.png

[^Screenshot 876.png]

 Election bug
 

 Key: SOLR-6769
 URL: https://issues.apache.org/jira/browse/SOLR-6769
 Project: Solr
  Issue Type: Bug
Reporter: Alexander S.
Priority: Critical
 Attachments: Screenshot 876.png


 Hello, I have a very simple setup: 2 shards and 2 replicas (4 nodes in 
 total).
 What I did was just stop the shards; the first shard stopped immediately, 
 but the second one took about 5 minutes to stop. You can see on the 
 screenshot what happened next. In short:
 1. Shard 1 stopped normally
 2. Replica 1 became a leader
 3. Shard 2 was still performing some job but wasn't accepting connections
 4. Replica 2 did not become a leader because Shard 2 was still there but 
 didn't work
 5. The entire cluster went down until Shard 2 stopped and Replica 2 became 
 a leader
 Marked as critical because this shuts down the entire cluster. Please 
 adjust if I am wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6658) SearchHandler should accept POST requests with JSON data in content stream for customized plug-in components

2014-11-20 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219208#comment-14219208
 ] 

Mikhail Khludnev commented on SOLR-6658:


[~markpeng] I agree with the claim; rejecting POSTs for search seems really odd 
to me. Just a consideration to simplify the migration: you can put the fixed 
class, let's say PostTolerantSearchHandler.java, into your codebase and refer 
to it from solrconfig.xml. It smells for sure, but it should work with the new 
version!
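
A sketch of that wiring, assuming the fixed copy is named 
com.example.solr.PostTolerantSearchHandler (both the package and the class 
name are made up for illustration) and is on the core's classpath; the class 
itself would be a copy of SearchHandler without the content-stream rejection 
shown in the patch below:

{code}
<!-- solrconfig.xml: route searches through the local, POST-tolerant copy -->
<requestHandler name="/select" class="com.example.solr.PostTolerantSearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
  </lst>
</requestHandler>
{code}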

 SearchHandler should accept POST requests with JSON data in content stream 
 for customized plug-in components
 

 Key: SOLR-6658
 URL: https://issues.apache.org/jira/browse/SOLR-6658
 Project: Solr
  Issue Type: Improvement
  Components: search, SearchComponents - other
Affects Versions: 4.7, 4.7.1, 4.7.2, 4.8, 4.8.1, 4.9, 4.9.1, 4.10, 4.10.1
Reporter: Mark Peng
 Attachments: SOLR-6658.patch


 This issue relates to the following one:
 *Return HTTP error on POST requests with no Content-Type*
 [https://issues.apache.org/jira/browse/SOLR-5517]
 The original consideration of the above is to make sure that incoming POST 
 requests to SearchHandler have a corresponding content-type specified. That 
 is quite reasonable; however, the following lines in the patch cause all 
 POST requests with content stream data to be rejected, which is not 
 necessary for that issue:
 {code}
 Index: solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java
 ===================================================================
 --- solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (revision 1546817)
 +++ solr/core/src/java/org/apache/solr/handler/component/SearchHandler.java (working copy)
 @@ -22,9 +22,11 @@
  import java.util.List;
  
  import org.apache.solr.common.SolrException;
 +import org.apache.solr.common.SolrException.ErrorCode;
  import org.apache.solr.common.params.CommonParams;
  import org.apache.solr.common.params.ModifiableSolrParams;
  import org.apache.solr.common.params.ShardParams;
 +import org.apache.solr.common.util.ContentStream;
  import org.apache.solr.core.CloseHook;
  import org.apache.solr.core.PluginInfo;
  import org.apache.solr.core.SolrCore;
 @@ -165,6 +167,10 @@
  {
    // int sleep = req.getParams().getInt("sleep",0);
    // if (sleep > 0) {log.error("SLEEPING for " + sleep); Thread.sleep(sleep);}
 +  if (req.getContentStreams() != null && req.getContentStreams().iterator().hasNext()) {
 +    throw new SolrException(ErrorCode.BAD_REQUEST, "Search requests cannot accept content streams");
 +  }
 +
    ResponseBuilder rb = new ResponseBuilder(req, rsp, components);
    if (rb.requestInfo != null) {
      rb.requestInfo.setResponseBuilder(rb);
 {code}
 We are using Solr 4.5.1 in our production services and are considering 
 upgrading to 4.9/5.0 to support more features. But due to this issue, we 
 don't have a chance to upgrade, because we have some important customized 
 SearchComponent plug-ins that need to get POST data from SearchHandler to do 
 further processing.
 Therefore, we are asking whether it is possible to remove the content stream 
 constraint shown above and to let SearchHandler accept POST requests with 
 *Content-Type: application/json*, allowing further components to get the data.
 Thank you.
 Best regards,
 Mark Peng



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_40-ea-b09) - Build # 11640 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11640/
Java: 32bit/jdk1.8.0_40-ea-b09 -client -XX:+UseConcMarkSweepGC (asserts: true)

11 tests failed.
REGRESSION:  org.apache.solr.cloud.BasicDistributedZkTest.testDistribSearch

Error Message:
commitWithin did not work on node: http://127.0.0.1:60578/ccbs/vx/collection1 expected:<68> but was:<67>

Stack Trace:
java.lang.AssertionError: commitWithin did not work on node: http://127.0.0.1:60578/ccbs/vx/collection1 expected:<68> but was:<67>
    at __randomizedtesting.SeedInfo.seed([CCC6C5DB579E371A:4D204BC320C15726]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.junit.Assert.assertEquals(Assert.java:472)
    at org.apache.solr.cloud.BasicDistributedZkTest.doTest(BasicDistributedZkTest.java:345)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)

[jira] [Updated] (SOLR-6763) Shard leader election thread can persist across connection loss

2014-11-20 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated SOLR-6763:

Attachment: SOLR-6763.patch

 Shard leader election thread can persist across connection loss
 ---

 Key: SOLR-6763
 URL: https://issues.apache.org/jira/browse/SOLR-6763
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
 Attachments: SOLR-6763.patch


 A ZK connection loss during a call to 
 ElectionContext.waitForReplicasToComeUp() will result in two leader election 
 processes for the shard running within a single node - the initial election 
 that was waiting, and another spawned by the ReconnectStrategy.  After the 
 function returns, the first election will create an ephemeral leader node.  
 The second election will then also attempt to create this node, fail, and try 
 to put itself into recovery.  It will also set the 'isLeader' value in its 
 CloudDescriptor to false.
 The first election, meanwhile, is happily maintaining the ephemeral leader 
 node.  But any updates that are sent to the shard will cause an exception due 
 to the mismatch between the cloudstate (where this node is the leader) and 
 the local CloudDescriptor leader state.
 I think the fix is straightforward - the call to zkClient.getChildren() in 
 waitForReplicasToComeUp() should be made with 'retryOnReconnect=false', 
 rather than 'true' as it is currently, because once the connection has 
 dropped we're going to launch a new election process anyway.
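 A minimal illustration of the change described above (variable names 
 assumed here; the actual diff is in the attached SOLR-6763.patch):
 {code}
 // In ElectionContext.waitForReplicasToComeUp(): don't retry the child
 // listing transparently across a reconnect. Once the session drops, the
 // reconnect path starts a fresh election, so this call should fail fast.
 // Previously: zkClient.getChildren(shardsElectZkPath, null, true);
 List<String> children = zkClient.getChildren(shardsElectZkPath, null, false);
 {code}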



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_20) - Build # 4337 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4337/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseSerialGC (asserts: false)

4 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
    at __randomizedtesting.SeedInfo.seed([B91B2A1CC4974F2B:38FDA404B3C82F17]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrServerTest.allTests(CloudSolrServerTest.java:300)
    at org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:124)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at java.lang.Thread.run(Thread.java:745)


FAILED:  

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1264: POMs out of sync

2014-11-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1264/

13 tests failed.
FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
    at __randomizedtesting.SeedInfo.seed([1C952C4167D231BD]:0)
    at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:92)
    at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at java.lang.Thread.run(Thread.java:745)


REGRESSION:  org.apache.solr.schema.DocValuesTest.testDocValues

Error Message:
SolrCore 'collection1' is not available due to init failure: Error 
instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'

Stack Trace:
org.apache.solr.common.SolrException: SolrCore 'collection1' is not available due to init failure: Error instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
    at __randomizedtesting.SeedInfo.seed([906C117DF4F8607E:30875B54AEC306B5]:0)
    at org.apache.solr.core.CoreContainer.getCore(CoreContainer.java:765)
    at org.apache.solr.util.TestHarness.getCoreInc(TestHarness.java:219)
    at org.apache.solr.util.TestHarness.update(TestHarness.java:235)
    at org.apache.solr.util.BaseTestHarness.checkUpdateStatus(BaseTestHarness.java:282)
    at org.apache.solr.util.BaseTestHarness.validateUpdate(BaseTestHarness.java:252)
    at org.apache.solr.SolrTestCaseJ4.checkUpdateU(SolrTestCaseJ4.java:677)
    at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:656)
    at org.apache.solr.SolrTestCaseJ4.assertU(SolrTestCaseJ4.java:650)
    at org.apache.solr.schema.DocValuesTest.setUp(DocValuesTest.java:41)
Caused by: org.apache.solr.common.SolrException: Error instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:895)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:653)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:510)
    at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:274)
    at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:268)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.solr.common.SolrException: Error instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
    at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:534)
    at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:519)
    at org.apache.solr.update.SolrIndexConfig.buildMergeScheduler(SolrIndexConfig.java:305)
    at org.apache.solr.update.SolrIndexConfig.toIndexWriterConfig(SolrIndexConfig.java:230)
    at org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:77)
    at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
    at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:530)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:797)
    at org.apache.solr.core.SolrCore.<init>(SolrCore.java:653)
    at org.apache.solr.core.CoreContainer.create(CoreContainer.java:510)
    at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:274)
    at org.apache.solr.core.CoreContainer$1.call(CoreContainer.java:268)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalAccessException: Class org.apache.solr.core.SolrResourceLoader can not access a member of class org.apache.lucene.util.LuceneTestCase$3 with modifiers ""

[GitHub] lucene-solr pull request: Create sparklr

2014-11-20 Thread noblepaul
GitHub user noblepaul opened a pull request:

https://github.com/apache/lucene-solr/pull/105

Create sparklr

Testing out spark integration

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/noblepaul/lucene-solr patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/105.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #105


commit cfb2e8f57a78ded226b6cffd46bc88feff08018a
Author: noblepaul noble.p...@gmail.com
Date:   2014-11-20T13:54:38Z

Create sparklr

Testing out spark integration




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request: Create sparklr

2014-11-20 Thread noblepaul
Github user noblepaul closed the pull request at:

https://github.com/apache/lucene-solr/pull/105


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_40-ea-b09) - Build # 11478 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11478/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:+UseCompressedOops -XX:+UseSerialGC (asserts: true)

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([8D09599118A329CD]:0)


REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([8D09599118A329CD]:0)




Build Log:
[...truncated 12331 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.ChaosMonkeySafeLeaderTest-8D09599118A329CD-001/init-core-data-001
   [junit4]   2> 1499258 T6010 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl (true) and clientAuth (false)
   [junit4]   2> 1499259 T6010 oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system property: /
   [junit4]   2> 1499264 T6010 oas.SolrTestCaseJ4.setUp ###Starting testDistribSearch
   [junit4]   2> 1499264 T6010 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 1499265 T6011 oasc.ZkTestServer$ZKServerMain.runFromConfig Starting server
   [junit4]   2> 1499365 T6010 oasc.ZkTestServer.run start zk server on port:40384
   [junit4]   2> 1499365 T6010 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 1499366 T6010 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 1499369 T6018 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@5cb6b6aa name:ZooKeeperConnection Watcher:127.0.0.1:40384 got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1499369 T6010 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 1499369 T6010 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 1499369 T6010 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 1499372 T6010 oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default ZkCredentialsProvider
   [junit4]   2> 1499374 T6010 oascc.ConnectionManager.waitForConnected Waiting for client to connect to ZooKeeper
   [junit4]   2> 1499376 T6021 oascc.ConnectionManager.process Watcher org.apache.solr.common.cloud.ConnectionManager@13c9b4ca name:ZooKeeperConnection Watcher:127.0.0.1:40384/solr got event WatchedEvent state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 1499376 T6010 oascc.ConnectionManager.waitForConnected Client is connected to ZooKeeper
   [junit4]   2> 1499376 T6010 oascc.SolrZkClient.createZkACLProvider Using default ZkACLProvider
   [junit4]   2> 1499376 T6010 oascc.SolrZkClient.makePath makePath: /collections/collection1
   [junit4]   2> 1499378 T6010 oascc.SolrZkClient.makePath makePath: /collections/collection1/shards
   [junit4]   2> 1499379 T6010 oascc.SolrZkClient.makePath makePath: /collections/control_collection
   [junit4]   2> 1499380 T6010 oascc.SolrZkClient.makePath makePath: /collections/control_collection/shards
   [junit4]   2> 1499381 T6010 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig-tlog.xml to /configs/conf1/solrconfig.xml
   [junit4]   2> 1499382 T6010 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.xml
   [junit4]   2> 1499384 T6010 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/schema15.xml to /configs/conf1/schema.xml
   [junit4]   2> 1499384 T6010 oascc.SolrZkClient.makePath makePath: /configs/conf1/schema.xml
   [junit4]   2> 1499386 T6010 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/solrconfig.snippet.randomindexconfig.xml to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1499386 T6010 oascc.SolrZkClient.makePath makePath: /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 1499387 T6010 oasc.AbstractZkTestCase.putConfig put /mnt/ssd/jenkins/workspace/Lucene-Solr-5.x-Linux/solr/core/src/test-files/solr/collection1/conf/stopwords.txt to /configs/conf1/stopwords.txt
   [junit4]   2> 1499387 T6010 oascc.SolrZkClient.makePath makePath: /configs/conf1/stopwords.txt
   [junit4]   2> 1499389 T6010 oasc.AbstractZkTestCase.putConfig put 

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219419#comment-14219419
 ] 

Mark Miller commented on SOLR-3619:
---

bq. This is exactly how configurations in zookeeper function. 

Not exactly - ZooKeeper starts with no collections, so you do essentially start 
with a gold master.

I think Alexandre is right. They should act as gold masters and get copied into 
place - much like you would upload a clean config set to zookeeper.



 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2217 - Still Failing

2014-11-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2217/

4 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
    at __randomizedtesting.SeedInfo.seed([7CE668C4F97FB950:FD00E6DC8E20D96C]:0)
    at org.apache.solr.client.solrj.impl.CloudSolrServerTest.allTests(CloudSolrServerTest.java:300)
    at org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:124)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrServerTest

Error Message:
ERROR: 

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-20 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219428#comment-14219428
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

bq. Using configsets in conjunction with schemaless mode (or even a config 
where the schema API is active) seems like it might not be a good idea.

This actually raises another question. If I have, for example, REST-controlled 
stop-list and have multiple collections share the configset in non-cloud mode, 
would other collections even know that the set was updated?

bq.  I'm not sure what to do about it, though.

Could we fix that by adding a *clone* parameter to the *CREATE* call, if the 
current behavior has to stay?
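
For illustration, the proposed call might look like this (the *clone* 
parameter is the suggestion here, not an existing API; the rest follows the 
usual CoreAdmin CREATE syntax):

{code}
http://localhost:8983/solr/admin/cores?action=CREATE&name=core2&configSet=shared&clone=true
{code}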

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2215 - Still Failing

2014-11-20 Thread Mark Miller
This kind of sucks though, right? What if we changed it from an anon class
in Lucene instead? Then it would work in more cases, and we wouldn't lose
this new test functionality as a Lucene test module consumer.

eg

  public static final class DoesNotStallConcurrentMergeScheduler extends ConcurrentMergeScheduler {
    @Override
    protected synchronized void maybeStall() {
      // deliberately a no-op: never stall merges during tests
    }
  }

Mark

On Thu Nov 20 2014 at 5:17:31 AM Michael McCandless 
luc...@mikemccandless.com wrote:

 Thanks Alan!

 Mike McCandless

 http://blog.mikemccandless.com


 On Thu, Nov 20, 2014 at 5:12 AM, Alan Woodward a...@flax.co.uk wrote:
  I committed a fix.  There's now a check in newRandomConfig() to see if
  there's a $ in the merge scheduler class name, and if there is it just
  uses CMS instead.
 
  Alan Woodward
  www.flax.co.uk
 
 
  On 19 Nov 2014, at 19:07, Alan Woodward wrote:
 
  So digging in…  Solr instantiates the merge scheduler via it's
  ResourceLoader, which takes a class name.  The random indexconfig snippet
  sets the classname to whatever the value of ${solr.tests.mergeScheduler}
 is.
  This is set in SolrTestCaseJ4.newRandomConfig():
 
  System.setProperty(solr.tests.mergeScheduler,
  iwc.getMergeScheduler().getClass().getName());
 
  And I guess you can't call Class.newInstance() on an anonymous class?
 
  Alan Woodward
  www.flax.co.uk
 
 
  On 19 Nov 2014, at 18:10, Michael McCandless wrote:
 
  Oh, I also saw this before committing, was confused, ran ant clean
 
  test in solr directory, and it passed, so I thought ant clean fixed
 
  it ... I guess not.
 
 
  With this change, in LuceneTestCase's newIndexWriterConfig, I
 
  sometimes randomly subclass ConcurrentMergeScheduler (to turn off
 
  merge throttling) in the random IWC that's returned.  Does this make
 
  Solr unhappy?  Why is Solr trying to instantiate the merge scheduler
 
  class that's already instantiated on IWC?  I'm confused...
 
 
  Mike McCandless
 
 
  http://blog.mikemccandless.com
 
 
 
  On Wed, Nov 19, 2014 at 1:00 PM, Alan Woodward a...@flax.co.uk wrote:
 
  I think this might be to do with Mike's changes in r1640457, but for some
 
  reason I can't up from svn or the apache git repo at the moment so I'm
 not
 
  certain.
 
 
  Alan Woodward
 
  www.flax.co.uk
 
 
 
  On 19 Nov 2014, at 17:05, Chris Hostetter wrote:
 
 
 
  Apologies -- I haven't been following the commits closely this week.
 
  Does anyone have any idea what changed at the low levels of the Solr
  testing class hierarchy to cause these failures in a variety of tests?
 
  : SolrCore 'collection1' is not available due to init failure: Error
  : instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'
 
  : Caused by: org.apache.solr.common.SolrException: Error instantiating
  : class: 'org.apache.lucene.util.LuceneTestCase$3'
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:532)
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:517)
  : at org.apache.solr.update.SolrIndexConfig.buildMergeScheduler(SolrIndexConfig.java:289)
  : at org.apache.solr.update.SolrIndexConfig.toIndexWriterConfig(SolrIndexConfig.java:214)
  : at org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
  : at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
  : at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:529)
  : at org.apache.solr.core.SolrCore.init(SolrCore.java:796)
  : ... 8 more
  : Caused by: java.lang.IllegalAccessException: Class
  : org.apache.solr.core.SolrResourceLoader can not access a member of class
  : org.apache.lucene.util.LuceneTestCase$3 with modifiers ""
  : at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
  : at java.lang.Class.newInstance(Class.java:368)
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:529)
  : ... 15 more
 
  : [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=SampleTest
  : -Dtests.method=testSimple -Dtests.seed=2E6E8F9ADADFEACF -Dtests.multiplier=2
  : -Dtests.slow=true -Dtests.locale=ja_JP_JP_#u-ca-japanese
  : -Dtests.timezone=Europe/Lisbon -Dtests.asserts=true
  : -Dtests.file.encoding=US-ASCII
 
  -Hoss
  http://www.lucidworks.com/
 
 




Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2215 - Still Failing

2014-11-20 Thread Alan Woodward
It's a hack, true.  I thought about creating the public class in 
LuceneTestCase, but it seemed weird to be changing the Lucene functionality to 
work around an issue in the way Solr instantiates things.  But you're right, 
this does mean that we lose a bit of test coverage in Solr, so maybe your 
suggestion is better.

Alan Woodward
www.flax.co.uk



[jira] [Commented] (LUCENE-6065) remove foreign readers from merge, fix LeafReader instead.

2014-11-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219458#comment-14219458
 ] 

Uwe Schindler commented on LUCENE-6065:
---

Hi,
I like the whole idea. One problem I see on my first review is 
FilterLeafReader2:

Currently all methods in LeafReader2 are final, and the ones to actually 
implement are protected - which is fine, I like this very much!!! But the 
filter/delegator pattern used by FilterLeafReader2 can only filter on the 
protected methods - because all others are final. This may make it impossible 
to have a subclass of FilterLeafReader2 outside the oal.index package, because 
it may not be able to delegate to protected methods... I am not sure if this 
is really a problem here, but we had similar issues around MTQ and its rewrite 
methods in the past. I do think the filtering works, because we never delegate 
to other classes, only to other instances... We should investigate with a 
simple test, because these are the kinds of issues that prevent users from 
doing the right thing, just because we never test with foreign packages.

But in fact: I would really like to have the LeafReader2 impl methods 
protected!!!
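
For reference, the Java access rule behind this concern, in a minimal 
two-package sketch (class and package names invented for illustration):

{code}
// file a/Base.java
package a;

public class Base {
  protected int secret() { return 42; }
}

// file b/Sub.java
package b;

public class Sub extends a.Base {
  int viaSelf()     { return secret(); }    // OK: inherited member on 'this'
  int viaSub(Sub o) { return o.secret(); }  // OK: receiver has the subclass's type
  // int viaBase(a.Base o) { return o.secret(); }
  //   ^ would not compile: outside package 'a', protected access is only
  //     allowed through a reference of the accessing subclass's own type
}
{code}

This is why a FilterLeafReader2 subclass outside oal.index could not call, 
say, getNormsReader() on a field typed as LeafReader2, while delegation 
through instances of its own type keeps working.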

 remove foreign readers from merge, fix LeafReader instead.
 

 Key: LUCENE-6065
 URL: https://issues.apache.org/jira/browse/LUCENE-6065
 Project: Lucene - Core
  Issue Type: Task
Reporter: Robert Muir
 Attachments: LUCENE-6065.patch


 Currently, SegmentMerger has supported two classes of citizens being merged:
 # SegmentReader
 # foreign reader (e.g. some FilterReader)
 It does an instanceof check and executes the merge differently. In the 
 SegmentReader case: stored fields and term vectors are bulk-merged, norms and 
 docvalues are transferred directly without piling up on the heap, CRC32 
 verification runs with IO locality of the data being merged, etc. Otherwise, 
 we treat it as a foreign reader and it's slow.
 This is just the low level; it gets worse as you wrap with more stuff. A 
 great example there is SortingMergePolicy: not only will it have the 
 low-level slowdowns listed above, it will e.g. cache/pile up OrdinalMaps for 
 all string docvalues fields being merged and other silliness that just makes 
 matters worse.
 Another use case is 5.0 users wishing to upgrade from fieldcache to 
 docvalues. This should be possible to implement with a simple incremental 
 transition based on a mergepolicy that uses UninvertingReader. But we 
 shouldn't populate internal fieldcache entries unnecessarily on merge and 
 spike RAM until all those segment cores are released, and other issues like 
 bulk merge of stored fields and not piling up norms should still work: it's 
 completely unrelated.
 There are more problems we can fix if we clean this up: 
 checkindex/checkreader can run efficiently where it doesn't need to RAM-spike 
 like merging, we can remove the checkIntegrity() method completely from 
 LeafReader since it can always be accomplished on producers, etc. In general 
 it would be nice to have just one codepath for merging that is as efficient 
 as we can make it, and to support things like index modifications during 
 merge.
 I spent a few weeks writing 3 different implementations to fix this 
 (interface, optional abstract class, fix LeafReader), and the latter is the 
 only one I don't completely hate: I think our APIs should be efficient for 
 indexing as well as search.
 So the proposal is simple: it's to instead refactor LeafReader to just 
 require the producer APIs as abstract methods (and FilterReaders should work 
 on that). The search-oriented APIs can just be final methods that defer to 
 those.
 So we would add 5 abstract methods, but implement 10 current methods as final 
 based on those, and then merging would always be efficient.
 {code}
 // new abstract codec-based apis

 /** 
  * Expert: retrieve thread-private TermVectorsReader
  * @throws AlreadyClosedException if this reader is closed
  * @lucene.internal 
  */
 protected abstract TermVectorsReader getTermVectorsReader();

 /** 
  * Expert: retrieve thread-private StoredFieldsReader
  * @throws AlreadyClosedException if this reader is closed
  * @lucene.internal 
  */
 protected abstract StoredFieldsReader getFieldsReader();

 /** 
  * Expert: retrieve underlying NormsProducer
  * @throws AlreadyClosedException if this reader is closed
  * @lucene.internal 
  */
 protected abstract NormsProducer getNormsReader();

 /** 
  * Expert: retrieve underlying DocValuesProducer
  * @throws AlreadyClosedException if this reader is closed
  * @lucene.internal 
  */
 protected abstract DocValuesProducer getDocValuesReader();

 /** 
  * Expert: retrieve underlying FieldsProducer
  * @throws AlreadyClosedException if this reader is closed
  * @lucene.internal 
  */
 protected abstract FieldsProducer getPostingsReader();

 // user/search oriented public apis based on the above
 public final Fields fields();
 public final void document(int, StoredFieldVisitor);
 public final Fields getTermVectors(int);
 public final NumericDocValues getNumericDocValues(String);
 public final Bits getDocsWithField(String);
 public final BinaryDocValues getBinaryDocValues(String);
 public final SortedDocValues getSortedDocValues(String);
 public final SortedNumericDocValues getSortedNumericDocValues(String);
 public final SortedSetDocValues getSortedSetDocValues(String);
 public final NumericDocValues getNormValues(String);
 {code}

[jira] [Commented] (LUCENE-6065) remove foreign readers from merge, fix LeafReader instead.

2014-11-20 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219473#comment-14219473
 ] 

Robert Muir commented on LUCENE-6065:
-

My idea for that was: FilterLeafReader2 would actually implement those as 
final too, and expose explicit methods to wrap, say, the NormsReader. This is 
so that getMergeInstance() will automatically work (fast api by default), and 
subclasses won't have to wrap in two places.

Given that filter readers would use this api, I'm not sure there is a 
visibility problem with 'protected', since they wouldn't mess with those 
methods. I am still investigating; first I am trying to clean up the producer 
apis themselves (e.g. trying to remove nullness and other things)
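
A sketch of that shape (the wrap-hook names are assumptions, not the patch; 
it also assumes FilterLeafReader2 lives in oal.index alongside LeafReader2, 
so delegating to the wrapped instance's protected getters is legal):

{code}
public abstract class FilterLeafReader2 extends LeafReader2 {
  protected final LeafReader2 in;

  protected FilterLeafReader2(LeafReader2 in) {
    this.in = in;
  }

  // final, so getMergeInstance() always goes through the real producer path
  @Override
  protected final NormsProducer getNormsReader() {
    return wrapNorms(in.getNormsReader());
  }

  /** Subclasses override this to filter norms; identity by default. */
  protected NormsProducer wrapNorms(NormsProducer norms) {
    return norms;
  }

  // ... same pattern for stored fields, term vectors, docvalues, postings
}
{code}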


[jira] [Commented] (SOLR-3774) /admin/mbean returning duplicate search handlers with names that map to their classes?

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219484#comment-14219484
 ] 

ASF subversion and git services commented on SOLR-3774:
---

Commit 1640756 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1640756 ]

SOLR-3774: Fix test.

 /admin/mbean returning duplicate search handlers with names that map to their 
 classes?
 --

 Key: SOLR-3774
 URL: https://issues.apache.org/jira/browse/SOLR-3774
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Mark Miller
 Fix For: 5.0, Trunk

 Attachments: SOLR-3774.patch, SOLR-3774.patch


 Offshoot of SOLR-3232...
 bq. Along with some valid entries with names equal to the request handler 
 names (/get search /browse) it also turned up one with the name 
 org.apache.solr.handler.RealTimeGetHandler and another with the name 
 org.apache.solr.handler.component.SearchHandler
 ...seems that we may have a bug with request handlers getting registered 
 multiple times, once under their real name and once using their class?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[JENKINS] Lucene-Solr-SmokeRelease-trunk - Build # 221 - Still Failing

2014-11-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-trunk/221/

No tests ran.

Build Log:
[...truncated 51026 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.01 sec (9.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.0-src.tgz...
   [smoker] 27.5 MB in 0.10 sec (280.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.tgz...
   [smoker] 63.2 MB in 0.10 sec (613.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.0.zip...
   [smoker] 72.5 MB in 0.11 sec (682.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5424 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5424 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 208 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (89.4 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.0.0-src.tgz...
   [smoker] 33.8 MB in 0.04 sec (881.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.tgz...
   [smoker] 145.7 MB in 0.18 sec (789.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.0.zip...
   [smoker] 151.9 MB in 0.21 sec (719.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* 
classes...
   [smoker] unpack lucene-6.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7
   [smoker] Startup failed; see log 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-trunk/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.0-java7/solr-example.log
   [smoker] 
   [smoker] Starting Solr on port 8983 from 

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1948 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1948/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseG1GC (asserts: true)

4 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([3BF635D729EFB344:BA10BBCF5EB0D378]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.allTests(CloudSolrServerTest.java:300)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:124)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[jira] [Reopened] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-20 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reopened SOLR-3619:
--

Re-opening to address Alexandre and Mark's feedback around configsets.







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-3774) /admin/mbean returning duplicate search handlers with names that map to their classes?

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3774?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219496#comment-14219496
 ] 

ASF subversion and git services commented on SOLR-3774:
---

Commit 1640757 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1640757 ]

SOLR-3774: Fix test.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2215 - Still Failing

2014-11-20 Thread Mark Miller
If it was just to work around Solr, I think the fix should be in Solr. But
we ship the Lucene test framework module, and Solr is not doing anything
too crazy here at all. So it makes more sense to me to make the Lucene test
module more friendly and consumable rather than doing a hack in Solr.

- Mark


[jira] [Commented] (LUCENE-6065) remove foreign readers from merge, fix LeafReader instead.

2014-11-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219513#comment-14219513
 ] 

Uwe Schindler commented on LUCENE-6065:
---

I agree. Actually you wrap something different than those readers. So maybe 
have some other class that you use at the lower level during merging - one 
class that holds all those FooReader implementations (the index view). On the 
searching side, LeafReader is a basic interface without any implementation, so 
maybe let it be a real Java interface implemented by the codec 
(SegmentReader), but never pass LeafReader to the merging api. Making 
everything that is the real LeafReader interface be a final implementation 
detail is just wrong.

So just have a different type of API behind the scenes when merging, one that 
you can wrap, and keep LeafReader completely out of merging - wrap something 
different instead.
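
Purely as an illustration of that separation (all names invented), the 
merge-side view could be a plain holder of the codec producers, so merging 
consumes and wraps this instead of LeafReader:

{code}
// illustrative only: a merge-side view, separate from the LeafReader
// search API; merging consumes this, and filter readers wrap it
public interface MergeView {
  StoredFieldsReader storedFields();
  TermVectorsReader termVectors();
  NormsProducer norms();
  DocValuesProducer docValues();
  FieldsProducer postings();
}
{code}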


[jira] [Commented] (SOLR-6752) Buffer Cache allocate/lost metrics should be exposed

2014-11-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219515#comment-14219515
 ] 

Mark Miller commented on SOLR-6752:
---

bq. Where should I look to make sure this is getting registered?

I fired up Solr on HDFS with JMX enabled and took a look at the exported mbeans 
with JConsole. I did not see anything for the block cache.

I'd look at how SolrResourceLoader adds the plugins that it loads to the 
JmxMonitoredMap.
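
As a quick cross-check without JConsole, something like this run inside the 
Solr JVM (or adapted to a remote JMXConnector) dumps what actually got 
registered - a sketch, assuming Solr's MBeans land in a domain starting with 
solr:

{code}
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListSolrMBeans {
  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    // list every MBean whose domain matches solr*; the block cache
    // Metrics bean should show up here if registration worked
    for (ObjectName name : server.queryNames(new ObjectName("solr*:*"), null)) {
      System.out.println(name);
    }
  }
}
{code}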

 Buffer Cache allocate/lost metrics should be exposed
 

 Key: SOLR-6752
 URL: https://issues.apache.org/jira/browse/SOLR-6752
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
Assignee: Mark Miller
  Labels: metrics
 Attachments: SOLR-6752.patch, SOLR-6752.patch


 Currently, {{o.a.s.store.blockcache.Metrics}} has fields for tracking buffer 
 allocations and losses, but they are never updated nor exposed to a receiving 
 metrics system. We should do both. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (LUCENE-6065) remove foreign readers from merge, fix LeafReader instead.

2014-11-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219519#comment-14219519
 ] 

Uwe Schindler commented on LUCENE-6065:
---

In addition, in Java 8 we have interfaces that can be extended with default 
methods. That's somehow exactly what we have here...
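
A generic sketch of that idiom (names invented; note that unlike final 
methods, default methods can still be overridden, which is part of why it is 
only somehow the same):

{code}
interface Reader {
  int maxDoc();               // low-level accessor, must be implemented

  default boolean isEmpty() { // derived, user-facing behavior
    return maxDoc() == 0;
  }
}
{code}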




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (LUCENE-6065) remove foreign readers from merge, fix LeafReader instead.

2014-11-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219528#comment-14219528
 ] 

Uwe Schindler commented on LUCENE-6065:
---

Maybe I was a little bit too complicated in my explanation, sorry. The main 
problem I have is: _a public search API where all public methods are final and 
the whole implementation is protected_




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (LUCENE-6065) remove foreign readers from merge, fix LeafReader instead.

2014-11-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219528#comment-14219528
 ] 

Uwe Schindler edited comment on LUCENE-6065 at 11/20/14 4:11 PM:
-

Maybe I was a little bit too complicated in my explanation, sorry. The main 
problem I have is: _a public search API where all public methods are final and 
the whole implementation is protected_, which is a horror when it comes to the 
delegation pattern used by a filtering API. This feels like Analyzer, which is 
unintuitive ([~mikemccand] also called out the complexity of analysis in his 
post on the mailing list about making a better Lucene)  :-)


was (Author: thetaphi):
Maybe i was a little bit too complicated in my explanation, sorry. The main 
problem I have is: _a public search API where all public methods are final and 
the whole implementation is protected_


[jira] [Commented] (SOLR-6763) Shard leader election thread can persist across connection loss

2014-11-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14219535#comment-14219535
 ] 

Mark Miller commented on SOLR-6763:
---

bq. and another spawned by the ReconnectStrategy. 

Hmm...this sounds fishy. We should not be spawning any new election thread on 
ConnectionLoss - only on Expiration.

 Shard leader election thread can persist across connection loss
 ---

 Key: SOLR-6763
 URL: https://issues.apache.org/jira/browse/SOLR-6763
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
 Attachments: SOLR-6763.patch


 A ZK connection loss during a call to 
 ElectionContext.waitForReplicasToComeUp() will result in two leader election 
 processes for the shard running within a single node - the initial election 
 that was waiting, and another spawned by the ReconnectStrategy.  After the 
 function returns, the first election will create an ephemeral leader node.  
 The second election will then also attempt to create this node, fail, and try 
 to put itself into recovery.  It will also set the 'isLeader' value in its 
 CloudDescriptor to false.
 The first election, meanwhile, is happily maintaining the ephemeral leader 
 node.  But any updates that are sent to the shard will cause an exception due 
 to the mismatch between the cloudstate (where this node is the leader) and 
 the local CloudDescriptor leader state.
 I think the fix is straightforward - the call to zkClient.getChildren() in 
 waitForReplicasToComeUp should be called with 'retryOnReconnect=false', 
 rather than 'true' as it is currently, because once the connection has 
 dropped we're going to launch a new election process anyway.
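
A minimal sketch of the proposed change (assuming the standard 
SolrZkClient.getChildren(path, watcher, retry) overload; the path and watcher 
arguments here are placeholders):

{code}
// In ElectionContext.waitForReplicasToComeUp(): pass false for the retry
// flag so a dropped connection is not transparently retried - the
// reconnect logic will spawn a fresh election anyway.
List<String> children = zkClient.getChildren(shardsElectZkPath, null, false); // was: true
{code}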



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6763) Shard leader election thread can persist across connection loss

2014-11-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219537#comment-14219537
 ] 

Mark Miller commented on SOLR-6763:
---

Which version did you see this on, by the way?

 Shard leader election thread can persist across connection loss
 ---

 Key: SOLR-6763
 URL: https://issues.apache.org/jira/browse/SOLR-6763
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
 Attachments: SOLR-6763.patch


 A ZK connection loss during a call to 
 ElectionContext.waitForReplicasToComeUp() will result in two leader election 
 processes for the shard running within a single node - the initial election 
 that was waiting, and another spawned by the ReconnectStrategy.  After the 
 function returns, the first election will create an ephemeral leader node.  
 The second election will then also attempt to create this node, fail, and try 
 to put itself into recovery.  It will also set the 'isLeader' value in its 
 CloudDescriptor to false.
 The first election, meanwhile, is happily maintaining the ephemeral leader 
 node.  But any updates that are sent to the shard will cause an exception due 
 to the mismatch between the cloudstate (where this node is the leader) and 
 the local CloudDescriptor leader state.
 I think the fix is straightforward - the call to zkClient.getChildren() in 
 waitForReplicasToComeUp should be called with 'retryOnReconnect=false', 
 rather than 'true' as it is currently, because once the connection has 
 dropped we're going to launch a new election process anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/ibm-j9-jdk7) - Build # 11641 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11641/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}
 (asserts: true)

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrServerTest

Error Message:
ERROR: SolrZkClient opens=23 closes=22

Stack Trace:
java.lang.AssertionError: ERROR: SolrZkClient opens=23 closes=22
at __randomizedtesting.SeedInfo.seed([C9DD4C29ADB1A9E6]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingZkClients(SolrTestCaseJ4.java:461)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:188)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:853)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrServerTest

Error Message:
8 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest: 
   1) Thread[id=226, name=zkCallback-30-thread-4, state=TIMED_WAITING, 
      group=TGRP-CloudSolrServerTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:237)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:471)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:370)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:953)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1099)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1161)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
        at java.lang.Thread.run(Thread.java:853)
   2) Thread[id=185, name=TEST-CloudSolrServerTest.testDistribSearch-seed#[C9DD4C29ADB1A9E6]-EventThread, 
      state=WAITING, group=TGRP-CloudSolrServerTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:197)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2054)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:453)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:494)
   3) Thread[id=225, name=zkCallback-30-thread-3, state=TIMED_WAITING, 
      group=TGRP-CloudSolrServerTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:237)
        at 

[jira] [Commented] (SOLR-6655) Improve SimplePostTool to easily specify target port/collection etc.

2014-11-20 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219551#comment-14219551
 ] 

Jan Høydahl commented on SOLR-6655:
---

Yes, feel free to open a new JIRA for a full-fledged production-ready feeder 
client with proper SolrJ and other dependencies...

 Improve SimplePostTool to easily specify target port/collection etc.
 

 Key: SOLR-6655
 URL: https://issues.apache.org/jira/browse/SOLR-6655
 Project: Solr
  Issue Type: Improvement
Reporter: Anshum Gupta
Assignee: Erik Hatcher
  Labels: difficulty-easy, impact-medium
 Fix For: 5.0, Trunk

 Attachments: SOLR-6655.patch


 Right now, the SimplePostTool has a single parameter 'url' that can be used 
 to send the request to a specific endpoint. It would make sense to allow 
 users to specify just the collection name, port etc. explicitly and 
 independently as separate parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Mark Harwood (JIRA)
Mark Harwood created LUCENE-6066:


 Summary: New remove method in PriorityQueue
 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0


It would be useful to be able to remove existing elements from a PriorityQueue. 
The proposal is that a linear scan is performed to find the element being 
removed, and then the end element in heap[size] is swapped into this position to 
perform the delete. The method downHeap() is then called to shuffle the 
replacement element back down the array, but the existing downHeap method must 
be modified to allow picking up an entry from any point in the array rather 
than always assuming the first element (which is its only current mode of 
operation).

A working JavaScript model of the proposal with animation is available here: 
http://jsfiddle.net/grcmquf2/22/ 

In tests the modified version of downHeap produces the same results as the 
existing impl but adds the ability to push down from any point.

An example use case that requires remove is where a client doesn't want more 
than N matches for any given key (e.g. no more than 5 products from any one 
retailer in a marketplace). In these circumstances a document that was 
previously thought of as competitive has to be removed from the final PQ and 
replaced with another doc (e.g. a retailer who already has 5 matches in the PQ 
receives a 6th match which is better than his previous ones). This particular 
process is managed by a special DiversifyingPriorityQueue which wraps the 
main PriorityQueue and could be contributed as part of another issue if there 
is interest in that.
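
A minimal sketch of the proposed method (assuming Lucene's 1-based 
heap[1..size] layout; the field and method names are illustrative, not the 
attached patch):

{code}
// Linear scan for the element, swap the tail element into its slot,
// then re-heapify downward from that slot.
public boolean remove(T element) {
  for (int i = 1; i <= size; i++) {
    if (heap[i] == element) {
      heap[i] = heap[size];   // move the last element into the hole
      heap[size] = null;
      size--;
      if (i <= size) {
        downHeap(i);          // downHeap generalized to start at any slot
      }
      return true;
    }
  }
  return false;
}
{code}

One subtlety: the element swapped in from the tail can also violate the heap 
property upward (it may sort before its new parent), so a complete 
implementation would likely need an upHeap step for that case as well.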



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Mark Harwood (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Harwood updated LUCENE-6066:
-
Attachment: LUCENE-PQRemoveV1.patch

New remove(element) method in PriorityQueue and related test

 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working JavaScript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (e.g. a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2215 - Still Failing

2014-11-20 Thread Michael McCandless
Hmm, I'm leery of giving this class a name: I don't want to call
attention to the fact that you can turn off CMS's throttling, even
inside Lucene's test-framework.


Mike McCandless

http://blog.mikemccandless.com


On Thu, Nov 20, 2014 at 10:51 AM, Mark Miller markrmil...@gmail.com wrote:
 If it was just to work around Solr, I think the fix should be in Solr. But
 we ship the Lucene test framework module, and Solr is not doing anything too
 crazy here at all. So it makes more sense to me to make the Lucene test
 module more friendly and consumable rather than doing a hack in Solr.

 - Mark

 On Thu Nov 20 2014 at 10:02:42 AM Alan Woodward a...@flax.co.uk wrote:

 It's a hack, true.  I thought about creating the public class in
 LuceneTestCase, but it seemed weird to be changing the lucene functionality
 to work around an issue in the way Solr instantiates things.  But you're
 right, this does mean that we lose a bit of test coverage in Solr, so maybe
 your suggestion is better.

 Alan Woodward
 www.flax.co.uk


 On 20 Nov 2014, at 14:38, Mark Miller wrote:

 This kind of sucks though, right? What if we changed it to a named class
 in Lucene instead - then wouldn't it work in more cases, and we wouldn't lose
 this new test functionality as a Lucene test module consumer?

 eg

   public static final class DoesNotStallConcurrentMergeScheduler
       extends ConcurrentMergeScheduler {
     @Override
     protected synchronized void maybeStall() {
     }
   }

 Mark

 On Thu Nov 20 2014 at 5:17:31 AM Michael McCandless
 luc...@mikemccandless.com wrote:

 Thanks Alan!

 Mike McCandless

 http://blog.mikemccandless.com


 On Thu, Nov 20, 2014 at 5:12 AM, Alan Woodward a...@flax.co.uk wrote:
  I committed a fix.  There's now a check in newRandomConfig() to see if
  there's a $ in the merge scheduler class name, and if there is it just
  uses CMS instead.
 
  Alan Woodward
  www.flax.co.uk
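
Per that description, the committed check presumably amounts to something like 
the following sketch (illustrative only; the variable names are not from the 
actual commit):

{code}
// In SolrTestCaseJ4.newRandomConfig(): anonymous classes carry a '$' in
// their binary name and cannot be instantiated reflectively by Solr's
// resource loader, so fall back to plain ConcurrentMergeScheduler.
String scheduler = iwc.getMergeScheduler().getClass().getName();
if (scheduler.indexOf('$') != -1) {
  scheduler = ConcurrentMergeScheduler.class.getName();
}
System.setProperty("solr.tests.mergeScheduler", scheduler);
{code}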
 
 
  On 19 Nov 2014, at 19:07, Alan Woodward wrote:
 
  So digging in…  Solr instantiates the merge scheduler via its
  ResourceLoader, which takes a class name.  The random indexconfig
  snippet
  sets the classname to whatever the value of
  ${solr.tests.mergeScheduler} is.
  This is set in SolrTestCaseJ4.newRandomConfig():
 
  System.setProperty(solr.tests.mergeScheduler,
  iwc.getMergeScheduler().getClass().getName());
 
  And I guess you can't call Class.newInstance() on an anonymous class?
 
  Alan Woodward
  www.flax.co.uk
 
 
  On 19 Nov 2014, at 18:10, Michael McCandless wrote:
 
  Oh, I also saw this before committing, was confused, ran "ant clean test"
  in the solr directory, and it passed, so I thought "ant clean" fixed
  it ... I guess not.

  With this change, in LuceneTestCase's newIndexWriterConfig, I
  sometimes randomly subclass ConcurrentMergeScheduler (to turn off
  merge throttling) in the random IWC that's returned.  Does this make
  Solr unhappy?  Why is Solr trying to instantiate the merge scheduler
  class that's already instantiated on IWC?  I'm confused...

  Mike McCandless

  http://blog.mikemccandless.com

  On Wed, Nov 19, 2014 at 1:00 PM, Alan Woodward a...@flax.co.uk wrote:

  I think this might be to do with Mike's changes in r1640457, but for some
  reason I can't 'up' from svn or the apache git repo at the moment, so I'm
  not certain.

  Alan Woodward
  www.flax.co.uk

  On 19 Nov 2014, at 17:05, Chris Hostetter wrote:

  Apologies -- I haven't been following the commits closely this week.

  Does anyone have any idea what changed at the low levels of the Solr
  testing class hierarchy to cause these failures in a variety of tests?

  : SolrCore 'collection1' is not available due to init failure: Error
  : instantiating class: 'org.apache.lucene.util.LuceneTestCase$3'

  : Caused by: org.apache.solr.common.SolrException: Error instantiating
  : class: 'org.apache.lucene.util.LuceneTestCase$3'
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:532)
  : at org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:517)
  : at org.apache.solr.update.SolrIndexConfig.buildMergeScheduler(SolrIndexConfig.java:289)
  : at org.apache.solr.update.SolrIndexConfig.toIndexWriterConfig(SolrIndexConfig.java:214)
  : at org.apache.solr.update.SolrIndexWriter.init(SolrIndexWriter.java:77)
  : at org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:64)
  : at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:529)
  : at org.apache.solr.core.SolrCore.init(SolrCore.java:796)
  : ... 8 more
  : Caused by: java.lang.IllegalAccessException: Class
  : org.apache.solr.core.SolrResourceLoader can not access a member of class
  : org.apache.lucene.util.LuceneTestCase$3 with modifiers 
  : at sun.reflect.Reflection.ensureMemberAccess(Reflection.java:109)
  : at java.lang.Class.newInstance(Class.java:368)
  : at

[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219620#comment-14219620
 ] 

Adrien Grand commented on LUCENE-6066:
--

Ensuring diversity of the search results is a requirement I have often heard, 
so I think it would be nice to see what we can do. However, I'm not sure that 
adding a linear-time removal method to PriorityQueue is the right thing to do; 
maybe we need a different data structure that would make removal less costly?

 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working JavaScript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (e.g. a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6766) Switch o.a.s.store.blockcache.Metrics to use JMX

2014-11-20 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219623#comment-14219623
 ] 

Mike Drob commented on SOLR-6766:
-

[~markrmil...@gmail.com] - Continuing discussion from SOLR-6752...

{quote}
I fired up Solr on HDFS with JMX enabled and took a look at the exported mbeans 
with JConsole. I did not see anything for the block cache.

I'd look at how SolrResourceLoader adds the plugins that it loads to the 
JmxMonitoredMap.
{quote}
Been digging deeper into this... metrics are tracked on a per-core basis. Each 
core has an {{infoRegistry}} that is populated in the constructor, either 
directly or from beans that the SolrResourceLoader had previously created. So 
instead of creating a new Metrics object directly, we will need to create one 
through {{SolrResourceLoader.newInstance()}}, which I think is what you were 
suggesting.

The trick here is that we need to create the bean before the {{SolrCore}} 
finishes constructing, but after the {{HdfsDirectoryFactory}} (HDF) exists, to 
make sure that it gets registered in time. So basically, the no-arg HDF 
constructor is our only option. The problem is that HDF (or any implementation 
of {{DirectoryFactory}}) is not aware of the resource loader, or even of a 
{{SolrConfig}}, so it cannot acquire a reference to the resource loader. I'm 
hesitant to add a {{setResourceLoader}} method or similar on 
{{DirectoryFactory}} because that is starting to feel very intrusive, but I 
also don't see another way to plumb this through.
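
For concreteness, the intrusive option being weighed would look roughly like 
the following (a hypothetical sketch only; neither this setter nor the wiring 
described in the comment exists today):

{code}
// Hypothetical: hand the resource loader to the factory before SolrCore
// finishes constructing, so HdfsDirectoryFactory could create its Metrics
// bean via SolrResourceLoader.newInstance() in time to be registered.
public abstract class DirectoryFactory {
  protected SolrResourceLoader loader;

  public void setResourceLoader(SolrResourceLoader loader) {
    this.loader = loader;
  }
}
{code}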

 Switch o.a.s.store.blockcache.Metrics to use JMX
 

 Key: SOLR-6766
 URL: https://issues.apache.org/jira/browse/SOLR-6766
 Project: Solr
  Issue Type: Bug
Reporter: Mike Drob
  Labels: metrics

 The Metrics class currently reports to hadoop metrics, but it would be better 
 to report to JMX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6770) Add/edit param sets and use them in Requests

2014-11-20 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6770:


 Summary: Add/edit param sets and use them in Requests
 Key: SOLR-6770
 URL: https://issues.apache.org/jira/browse/SOLR-6770
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul


Make it possible to define paramsets and use them directly in requests.

example:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"create-paramset" : {"name" : "x",
  "val" : {
      "a" : "A val",
      "b" : "B val"}
},
"update-paramset" : {"name" : "y",
  "val" : {
      "x" : "X val",
      "Y" : "Y val"}
},
"delete-paramset" : "z"
}'
{code}

This data will be stored in conf/paramsets.json.

example usage: http://localhost/solr/collection/select?paramset=x




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6607) Registering pluggable components through API

2014-11-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6607:
-
Description: 
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ (or, in standalone 
Solr, in a core called \_system_) as a binary field.
 * A command 'set-configuration' which can set the configuration of a 
component. This configuration will be saved inside the configoverlay.json.
 * A command 'remove-configuration' which can remove a plugin configuration 
from the configoverlay.json (but not from solrconfig.xml).

The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (please 
suggest a better name). The objective is to be able to completely disable this 
feature by default, so that it can only be enabled by a user with file system 
access. Any system which can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

example for registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"create-request-handler" : {"name" : "/mypath",
"class" : "com.mycomponent.ClassName", "location" : "_system:mycomponent",
"version" : 2,
"defaults" : {"x" : "y",
"a" : "b"}
}
}'
{code}

loading the binary to solr:

{code}
curl http://localhost:8983/solr/_system/jar?name=mycomponent --data-binary 
@myselfcontainingcomponent.jar
{code}

  was:
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ (or, in standalone 
Solr, in a core called \_system_) as a binary field.
 * A command 'set-configuration' which can set the configuration of a 
component. This configuration will be saved inside the configoverlay.json.
 * A command 'remove-configuration' which can remove a plugin configuration 
from the configoverlay.json (but not from solrconfig.xml).

The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (please 
suggest a better name). The objective is to be able to completely disable this 
feature by default, so that it can only be enabled by a user with file system 
access. Any system which can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

example for registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"create-request-handler" : {"name" : "/mypath",
"class" : "com.mycomponent.ClassName", "location" : "index:mycomponent",
"version" : 2,
"defaults" : {"x" : "y",
"a" : "b"}
}
}'
{code}

loading the binary to solr:

{code}
curl http://localhost:8983/solr/_system/jar?name=mycomponent --data-binary 
@myselfcontainingcomponent.jar
{code}


 Registering pluggable components through API
 

 Key: SOLR-6607
 URL: https://issues.apache.org/jira/browse/SOLR-6607
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 The concept of solrconfig editing is split into multiple pieces. This issue 
 is about registering components and uploading binaries through an API.
 This supports multiple operations:
  * Upload a jar file which can be used later in a plugin configuration. The 
 jar file will be stored in a special collection called \_system_ (or, in 
 standalone Solr, in a core called \_system_) as a binary field.
  * A command 'set-configuration' which can set the configuration of a 
 component. This configuration will be saved inside the configoverlay.json.
  * A command 'remove-configuration' which can remove a plugin configuration 
 from the configoverlay.json (but not from solrconfig.xml).
 The components can be registered from a jar file that is available in the 
 classpath of all nodes. Registering components from uploaded jars will 
 only be possible if systems are started with an option -DloadRuntimeLibs 
 (please suggest a better name). The objective is to be able to completely 
 disable this feature by default, so that it can only be enabled by a user 
 with file system access. Any system which can load remote libraries is a 
 security hole, and a lot of organizations would want 

[jira] [Updated] (SOLR-6607) Registering pluggable components through API

2014-11-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6607:
-
Description: 
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ (or, in standalone 
Solr, in a core called \_system_) as a binary field.
 * A command 'set-configuration' which can set the configuration of a 
component. This configuration will be saved inside the configoverlay.json.
 * A command 'remove-configuration' which can remove a plugin configuration 
from the configoverlay.json (but not from solrconfig.xml).

The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (please 
suggest a better name). The objective is to be able to completely disable this 
feature by default, so that it can only be enabled by a user with file system 
access. Any system which can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

example for registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"create-request-handler" : {"name" : "/mypath",
"class" : "com.mycomponent.ClassName", "location" : "_system:mycomponent",
"version" : 2,
"defaults" : {"x" : "y",
"a" : "b"}
}
}'
{code}

loading the binary to solr:

{code}
curl http://localhost:8983/solr/_system/jar?name=mycomponent --data-binary 
@myselfcontainingcomponent.jar
{code}

  was:
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ (or, in standalone 
Solr, in a core called \_system_) as a binary field.
 * A command 'set-configuration' which can set the configuration of a 
component. This configuration will be saved inside the configoverlay.json.
 * A command 'remove-configuration' which can remove a plugin configuration 
from the configoverlay.json (but not from solrconfig.xml).

The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (please 
suggest a better name). The objective is to be able to completely disable this 
feature by default, so that it can only be enabled by a user with file system 
access. Any system which can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

example for registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"create-request-handler" : {"name" : "/mypath",
"class" : "com.mycomponent.ClassName", "location" : "_system:mycomponent",
"version" : 2,
"defaults" : {"x" : "y",
"a" : "b"}
}
}'
{code}

loading the binary to solr:

{code}
curl http://localhost:8983/solr/_system/jar?name=mycomponent --data-binary 
@myselfcontainingcomponent.jar
{code}


 Registering pluggable components through API
 

 Key: SOLR-6607
 URL: https://issues.apache.org/jira/browse/SOLR-6607
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 The concept of solrconfig editing is split into multiple pieces. This issue 
 is about registering components and uploading binaries through an API.
 This supports multiple operations:
  * Upload a jar file which can be used later in a plugin configuration. The 
 jar file will be stored in a special collection called \_system_ (or, in 
 standalone Solr, in a core called \_system_) as a binary field.
  * A command 'set-configuration' which can set the configuration of a 
 component. This configuration will be saved inside the configoverlay.json.
  * A command 'remove-configuration' which can remove a plugin configuration 
 from the configoverlay.json (but not from solrconfig.xml).
 The components can be registered from a jar file that is available in the 
 classpath of all nodes. Registering components from uploaded jars will 
 only be possible if systems are started with an option -DloadRuntimeLibs 
 (please suggest a better name). The objective is to be able to completely 
 disable this feature by default, so that it can only be enabled by a user 
 with file system access. Any system which can load remote libraries is a 
 security hole, and a lot of organizations would 

[jira] [Updated] (SOLR-6607) Registering pluggable components through API

2014-11-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6607:
-
Description: 
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ (or, in standalone 
Solr, in a core called \_system_) as a binary field.
 * Commands 'create-requesthandler', 'update-requesthandler' and 
'delete-requesthandler' which can set the configuration of a component. This 
configuration will be saved inside the configoverlay.json.

The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (please 
suggest a better name). The objective is to be able to completely disable this 
feature by default, so that it can only be enabled by a user with file system 
access. Any system which can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

example for registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"create-request-handler" : {"name" : "/mypath",
"class" : "com.mycomponent.ClassName", "location" : "_system:mycomponent",
"version" : 2,
"defaults" : {"x" : "y",
"a" : "b"}
}
}'
{code}

loading the binary to solr:

{code}
curl http://localhost:8983/solr/_system/jar?name=mycomponent --data-binary 
@myselfcontainingcomponent.jar
{code}

  was:
The concept of solrconfig editing is split into multiple pieces. This issue is 
about registering components and uploading binaries through an API.

This supports multiple operations:

 * Upload a jar file which can be used later in a plugin configuration. The jar 
file will be stored in a special collection called \_system_ (or, in standalone 
Solr, in a core called \_system_) as a binary field.
 * A command 'set-configuration' which can set the configuration of a 
component. This configuration will be saved inside the configoverlay.json.
 * A command 'remove-configuration' which can remove a plugin configuration 
from the configoverlay.json (but not from solrconfig.xml).

The components can be registered from a jar file that is available in the 
classpath of all nodes. Registering components from uploaded jars will only 
be possible if systems are started with an option -DloadRuntimeLibs (please 
suggest a better name). The objective is to be able to completely disable this 
feature by default, so that it can only be enabled by a user with file system 
access. Any system which can load remote libraries is a security hole, and a 
lot of organizations would want to disable this.

example for registering a component:
{code}
curl http://localhost:8983/solr/collection1/config -H 
'Content-type:application/json' -d '{
"create-request-handler" : {"name" : "/mypath",
"class" : "com.mycomponent.ClassName", "location" : "_system:mycomponent",
"version" : 2,
"defaults" : {"x" : "y",
"a" : "b"}
}
}'
{code}

loading the binary to solr:

{code}
curl http://localhost:8983/solr/_system/jar?name=mycomponent --data-binary 
@myselfcontainingcomponent.jar
{code}


 Registering pluggable components through API
 

 Key: SOLR-6607
 URL: https://issues.apache.org/jira/browse/SOLR-6607
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
Assignee: Noble Paul

 The concept of solrconfig editing is split into multiple pieces. This issue 
 is about registering components and uploading binaries through an API.
 This supports multiple operations:
  * Upload a jar file which can be used later in a plugin configuration. The 
 jar file will be stored in a special collection called \_system_ (or, in 
 standalone Solr, in a core called \_system_) as a binary field.
  * Commands 'create-requesthandler', 'update-requesthandler' and 
 'delete-requesthandler' which can set the configuration of a component. This 
 configuration will be saved inside the configoverlay.json.
 The components can be registered from a jar file that is available in the 
 classpath of all nodes. Registering components from uploaded jars will 
 only be possible if systems are started with an option -DloadRuntimeLibs 
 (please suggest a better name). The objective is to be able to completely 
 disable this feature by default, so that it can only be enabled by a user 
 with file system access. Any system which can load remote libraries is a 
 security hole, and a lot of organizations would want to disable this.
 example for registering a component:
 {code}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json' -d '{
 "create-request-handler" : 

[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Mark Harwood (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219651#comment-14219651
 ] 

Mark Harwood commented on LUCENE-6066:
--

If the PQ set the current array position as a property of each element every 
time it moved one around, I could pass the array index to remove() rather than 
an object that has to be scanned for.
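
A minimal sketch of that idea (hypothetical names; nothing like this exists in 
the current PriorityQueue):

{code}
// Hypothetical: elements record their current heap slot so callers can
// remove by index in O(log n) instead of scanning the whole array.
public interface SlotTracked {
  void setHeapSlot(int slot);  // called by the queue on every move
  int getHeapSlot();
}
// Inside the queue, every placement would become:
//   heap[i] = e; e.setHeapSlot(i);
// and remove(e) could then start directly at e.getHeapSlot().
{code}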

 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working JavaScript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (e.g. a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6763) Shard leader election thread can persist across connection loss

2014-11-20 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219673#comment-14219673
 ] 

Alan Woodward commented on SOLR-6763:
-

This is on 5.x.  And you're right, it was actually caused by session expiry, 
not connection loss (a runaway query caused a massive GC pause).

 Shard leader election thread can persist across connection loss
 ---

 Key: SOLR-6763
 URL: https://issues.apache.org/jira/browse/SOLR-6763
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
 Attachments: SOLR-6763.patch


 A ZK connection loss during a call to 
 ElectionContext.waitForReplicasToComeUp() will result in two leader election 
 processes for the shard running within a single node - the initial election 
 that was waiting, and another spawned by the ReconnectStrategy.  After the 
 function returns, the first election will create an ephemeral leader node.  
 The second election will then also attempt to create this node, fail, and try 
 to put itself into recovery.  It will also set the 'isLeader' value in its 
 CloudDescriptor to false.
 The first election, meanwhile, is happily maintaining the ephemeral leader 
 node.  But any updates that are sent to the shard will cause an exception due 
 to the mismatch between the cloudstate (where this node is the leader) and 
 the local CloudDescriptor leader state.
 I think the fix is straightforward - the call to zkClient.getChildren() in 
 waitForReplicasToComeUp should be called with 'retryOnReconnect=false', 
 rather than 'true' as it is currently, because once the connection has 
 dropped we're going to launch a new election process anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6763) Shard leader election thread can persist across connection loss

2014-11-20 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219737#comment-14219737
 ] 

Mark Miller commented on SOLR-6763:
---

Hmm...have to look closer then, but in that case the fix doesn't sound right.

 Shard leader election thread can persist across connection loss
 ---

 Key: SOLR-6763
 URL: https://issues.apache.org/jira/browse/SOLR-6763
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
 Attachments: SOLR-6763.patch


 A ZK connection loss during a call to 
 ElectionContext.waitForReplicasToComeUp() will result in two leader election 
 processes for the shard running within a single node - the initial election 
 that was waiting, and another spawned by the ReconnectStrategy.  After the 
 function returns, the first election will create an ephemeral leader node.  
 The second election will then also attempt to create this node, fail, and try 
 to put itself into recovery.  It will also set the 'isLeader' value in its 
 CloudDescriptor to false.
 The first election, meanwhile, is happily maintaining the ephemeral leader 
 node.  But any updates that are sent to the shard will cause an exception due 
 to the mismatch between the cloudstate (where this node is the leader) and 
 the local CloudDescriptor leader state.
 I think the fix is straightforward - the call to zkClient.getChildren() in 
 waitForReplicasToComeUp should be called with 'retryOnReconnect=false', 
 rather than 'true' as it is currently, because once the connection has 
 dropped we're going to launch a new election process anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219753#comment-14219753
 ] 

Michael McCandless commented on LUCENE-6066:


I agree it would be nice to make diversity work well with Lucene.  Isn't it 
essentially the same thing as grouping, which, in a second pass, holds the top 
N hits for the top M groups found in the first pass?


 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working JavaScript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (e.g. a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 688 - Still Failing

2014-11-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/688/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
java.lang.NullPointerException 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
java.lang.NullPointerException

at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 1906 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1906/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: false)

4 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrServerTest.testDistribSearch

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([9CF95A9BF75333E1:1D1FD483800C53DD]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.allTests(CloudSolrServerTest.java:300)
at 
org.apache.solr.client.solrj.impl.CloudSolrServerTest.doTest(CloudSolrServerTest.java:124)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Mark Harwood (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219822#comment-14219822
 ] 

Mark Harwood commented on LUCENE-6066:
--

I guess it's different from grouping in that: 
1) it only involves one pass over the data
2) the client doesn't have to guess up-front the number of groups he is going 
to need
3) we don't get any filler docs in each group's results, i.e. a bunch of 
irrelevant docs for an author with only one good hit.

 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working javascript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (eg a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 
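
A toy, self-contained illustration of the removal technique described above (an 
int min-heap; the names, and the defensive upHeap pass, are this sketch's 
assumptions rather than the actual patch on the issue):

{code}
import java.util.Arrays;

/** Toy 1-based binary min-heap (heap[0] unused, as in Lucene's PriorityQueue)
 *  showing a linear-scan remove() plus a downHeap(int) that can start from
 *  any slot. Illustrative only. */
class SketchHeap {
  private int[] heap = new int[16];
  private int size = 0;

  void add(int v) {
    if (++size == heap.length) heap = Arrays.copyOf(heap, heap.length * 2);
    heap[size] = v;
    upHeap(size);
  }

  boolean remove(int v) {
    for (int i = 1; i <= size; i++) {   // linear scan for the victim
      if (heap[i] == v) {
        heap[i] = heap[size--];         // swap the tail element into the hole
        downHeap(i);                    // push the replacement back down...
        upHeap(i);                      // ...though in general it can also need to move up
        return true;
      }
    }
    return false;
  }

  // Generalized to start from any slot i, as the proposal requires.
  private void downHeap(int i) {
    int node = heap[i];
    int child;
    while ((child = 2 * i) <= size) {
      if (child < size && heap[child + 1] < heap[child]) child++; // smaller child
      if (heap[child] >= node) break;   // heap property restored
      heap[i] = heap[child];
      i = child;
    }
    heap[i] = node;
  }

  private void upHeap(int i) {
    int node = heap[i];
    while (i > 1 && node < heap[i / 2]) {
      heap[i] = heap[i / 2];
      i /= 2;
    }
    heap[i] = node;
  }
}
{code}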



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6763) Shard leader election thread can persist across connection loss

2014-11-20 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219843#comment-14219843
 ] 

Alan Woodward commented on SOLR-6763:
-

Yeah, I think the important thing to do here is to bail out on a 
SessionExpiredException.  So the added try-catch clause in the above patch will 
fix it, but we want to keep the getChildren() call with retryOnReconnect=true.
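
A minimal sketch of that shape (assumed context: inside 
ElectionContext.waitForReplicasToComeUp with a SolrZkClient in scope; this is 
illustrative, not the actual patch):

{code}
import java.util.List;
import org.apache.solr.common.cloud.SolrZkClient;
import org.apache.zookeeper.KeeperException;

class ElectionWaitSketch {
  List<String> pollElectionNodes(SolrZkClient zkClient, String shardsElectZkPath)
      throws KeeperException, InterruptedException {
    try {
      // Keep retryOnReconnect=true so transient connection blips still retry...
      return zkClient.getChildren(shardsElectZkPath, null, true);
    } catch (KeeperException.SessionExpiredException e) {
      // ...but once the session has expired, the reconnect logic will spawn a
      // fresh election, so this (now stale) wait must bail out entirely.
      return null;
    }
  }
}
{code}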

 Shard leader election thread can persist across connection loss
 ---

 Key: SOLR-6763
 URL: https://issues.apache.org/jira/browse/SOLR-6763
 Project: Solr
  Issue Type: Bug
Reporter: Alan Woodward
 Attachments: SOLR-6763.patch


 A ZK connection loss during a call to 
 ElectionContext.waitForReplicasToComeUp() will result in two leader election 
 processes for the shard running within a single node - the initial election 
 that was waiting, and another spawned by the ReconnectStrategy.  After the 
 function returns, the first election will create an ephemeral leader node.  
 The second election will then also attempt to create this node, fail, and try 
 to put itself into recovery.  It will also set the 'isLeader' value in its 
 CloudDescriptor to false.
 The first election, meanwhile, is happily maintaining the ephemeral leader 
 node.  But any updates that are sent to the shard will cause an exception due 
 to the mismatch between the cloudstate (where this node is the leader) and 
 the local CloudDescriptor leader state.
 I think the fix is straightforward - the call to zkClient.getChildren() in 
 waitForReplicasToComeUp should be made with 'retryOnReconnect=false', 
 rather than 'true' as it is currently, because once the connection has 
 dropped we're going to launch a new election process anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5833) Suggestor Version 2 doesn't support multiValued fields

2014-11-20 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219886#comment-14219886
 ] 

Varun Thacker commented on LUCENE-5833:
---

Hi [~mikemccand] ,

Can you please review the patch and let me know if anything else needs to be 
done?

 Suggestor Version 2 doesn't support multiValued fields
 --

 Key: LUCENE-5833
 URL: https://issues.apache.org/jira/browse/LUCENE-5833
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.8.1
Reporter: Greg Harris
Assignee: Steve Rowe
 Attachments: LUCENE-5833.patch, LUCENE-5833.patch, LUCENE-5833.patch, 
 SOLR-6210.patch


 So if you use a multiValued field in the new suggestor it will not pick up 
 terms for any term after the first one. So it treats the first term as the 
 only term it will make its dictionary from. 
 This is the suggestor I'm talking about:
 https://issues.apache.org/jira/browse/SOLR-5378



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Mark Harwood (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219901#comment-14219901
 ] 

Mark Harwood commented on LUCENE-6066:
--

An analogy might be making a compilation album of 1967's top hit records:

1) A vanilla Lucene query's results might look like a "Best of the Beatles" 
album - no diversity
2) A grouping query would produce "The 10 top-selling artists of 1967" - some 
killer and quite a lot of filler
3) A diversified query would be the top 20 hit records of that year - with a 
max of 3 Beatles hits to maintain diversity

 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working javascript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (eg a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6771) Sending DIH request to non-leader can result in different number of successful documents

2014-11-20 Thread Greg Harris (JIRA)
Greg Harris created SOLR-6771:
-

 Summary: Sending DIH request to non-leader can result in different 
number of successful documents
 Key: SOLR-6771
 URL: https://issues.apache.org/jira/browse/SOLR-6771
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Greg Harris


Basically, if you send a DIH request to a non-leader, the following set of 
circumstances can occur:
1) If there are errors in some of the documents, the request itself is rejected 
by the leader (try making a required field null in some documents to make 
sure there are rejections). 
2) This causes all documents in that request to appear to fail. The number of 
documents that a follower is able to update via DIH appears variable. 
3) It appears you need to use a large number of documents to see the anomaly. 

This results in the following error on the follower:
2014-11-20 12:06:16.470; 34054 [Thread-18] WARN  
org.apache.solr.update.processor.DistributedUpdateProcessor  – Error sending 
update
org.apache.solr.common.SolrException: Bad Request



request: 
http://10.0.2.15:8983/solr/collection1/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2F10.0.2.15%3A8982%2Fsolr%2Fcollection1%2F&wt=javabin&version=2
at 
org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrServer$Runner.run(ConcurrentUpdateSolrServer.java:240)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:722)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219950#comment-14219950
 ] 

Michael McCandless commented on LUCENE-6066:


Thanks Mark, that makes sense, and that's a great example to help understand 
it.  The fact that diversity doesn't need to keep the filler is what allows 
it to be a single pass.

If we have this linear-cost remove, what's the worst case complexity?  When all 
N hits have the same key but are visited from worst to best score?  Is it 
then O(N * M), where M is the number of top hits I want?

bq. If the PQ set the current array position as a property of each element 
every time it moved them around I could pass the array index to remove() rather 
than an object that had to be scanned for

This seems promising, maybe as a separate dedicated (forked) PQ impl?  But how 
will you track the min element for each key in the PQ (to know which element to 
remove, when a more competitive hit with that key arrives)?

 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working javascript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (eg a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5833) Suggestor Version 2 doesn't support multiValued fields

2014-11-20 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219954#comment-14219954
 ] 

Michael McCandless commented on LUCENE-5833:


Oh thanks for the reminder [~varunthacker], I'll have a look.

 Suggestor Version 2 doesn't support multiValued fields
 --

 Key: LUCENE-5833
 URL: https://issues.apache.org/jira/browse/LUCENE-5833
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/other
Affects Versions: 4.8.1
Reporter: Greg Harris
Assignee: Steve Rowe
 Attachments: LUCENE-5833.patch, LUCENE-5833.patch, LUCENE-5833.patch, 
 SOLR-6210.patch


 So if you use a multiValued field in the new suggestor it will not pick up 
 terms for any term after the first one. So it treats the first term as the 
 only term it will make its dictionary from. 
 This is the suggestor I'm talking about:
 https://issues.apache.org/jira/browse/SOLR-5378



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5123) invert the codec postings API

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219969#comment-14219969
 ] 

ASF subversion and git services commented on LUCENE-5123:
-

Commit 1640807 from [~mikemccand] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1640807 ]

LUCENE-5123: fix changes

 invert the codec postings API
 -

 Key: LUCENE-5123
 URL: https://issues.apache.org/jira/browse/LUCENE-5123
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Robert Muir
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5123.patch, LUCENE-5123.patch, LUCENE-5123.patch, 
 LUCENE-5123.patch, LUCENE-5123.patch


 Currently FieldsConsumer/PostingsConsumer/etc is a push oriented api, e.g. 
 FreqProxTermsWriter streams the postings at flush, and the default merge() 
 takes the incoming codec api and filters out deleted docs and pushes via 
 same api (but that can be overridden).
 It could be cleaner if we allowed for a pull model instead (like 
 DocValues). For example, maybe FreqProxTermsWriter could expose a Terms of 
 itself and just passed this to the codec consumer.
 This would give the codec more flexibility to e.g. do multiple passes if it 
 wanted to do things like encode high-frequency terms more efficiently with a 
 bitset-like encoding or other things...
 A codec can try to do things like this to some extent today, but its very 
 difficult (look at buffering in Pulsing). We made this change with DV and it 
 made a lot of interesting optimizations easy to implement...
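
As a toy illustration of the push-vs-pull distinction (invented names, not 
Lucene's actual codec API):

{code}
// Push: the indexer drives and calls into the codec one posting at a time,
// so the codec sees the data exactly once, in indexing order.
interface PushPostingsConsumer {
  void startTerm(String term);
  void addPosting(int docID, int freq);
  void finishTerm();
}

// Pull: the codec drives and iterates the postings itself, so it is free to
// make multiple passes, buffer, or re-encode high-frequency terms specially.
interface PullPostingsSource {
  Iterable<String> terms();
  Iterable<int[]> postings(String term); // each entry is a {docID, freq} pair
}
{code}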



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5123) invert the codec postings API

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219971#comment-14219971
 ] 

ASF subversion and git services commented on LUCENE-5123:
-

Commit 1640808 from [~mikemccand] in branch 'dev/trunk'
[ https://svn.apache.org/r1640808 ]

LUCENE-5123: fix changes

 invert the codec postings API
 -

 Key: LUCENE-5123
 URL: https://issues.apache.org/jira/browse/LUCENE-5123
 Project: Lucene - Core
  Issue Type: Wish
Reporter: Robert Muir
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5123.patch, LUCENE-5123.patch, LUCENE-5123.patch, 
 LUCENE-5123.patch, LUCENE-5123.patch


 Currently FieldsConsumer/PostingsConsumer/etc is a push oriented api, e.g. 
 FreqProxTermsWriter streams the postings at flush, and the default merge() 
 takes the incoming codec api and filters out deleted docs and pushes via 
 same api (but that can be overridden).
 It could be cleaner if we allowed for a pull model instead (like 
 DocValues). For example, maybe FreqProxTermsWriter could expose a Terms of 
 itself and just passed this to the codec consumer.
 This would give the codec more flexibility to e.g. do multiple passes if it 
 wanted to do things like encode high-frequency terms more efficiently with a 
 bitset-like encoding or other things...
 A codec can try to do things like this to some extent today, but its very 
 difficult (look at buffering in Pulsing). We made this change with DV and it 
 made a lot of interesting optimizations easy to implement...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-11-20 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14219978#comment-14219978
 ] 

Uwe Schindler commented on LUCENE-5950:
---

Hi,
In preparation for this move I changed the Jenkins servers as follows:
- For now, disabled trunk builds on ASF Jenkins. The current Java 8 installed 
there crashes in the network layer on Solr tests. I will update it tomorrow.
- Policeman Jenkins was changed so that trunk builds use Java 8+ in the JVM 
randomization script (easy change).
- Flonkings Jenkins seems dead (HTTP times out, SSH closes the connection). 
[~simonw]: Can you check this server?

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6066) New remove method in PriorityQueue

2014-11-20 Thread Mark Harwood (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220089#comment-14220089
 ] 

Mark Harwood commented on LUCENE-6066:
--

bq. But how will you track the min element for each key in the PQ (to know 
which element to remove, when a more competitive hit with that key arrives)?

I was thinking of this as a foundation: (pseudo code) 

{code:title=DiversifyingPriorityQueue.java|borderStyle=solid}
abstract class KeyedElement {
  int pqPos;
  abstract Object getKey();
}

class DiversifyingPriorityQueue<T extends KeyedElement> extends PriorityQueue<T> {
  FastRemovablePriorityQueue<T> mainPQ;
  Map<Object, PriorityQueue> perKeyQueues;
}
{code}

You can probably guess at the logic (a rough sketch of the insert path follows 
below), but it is based around: 
* making sure each key has a max of n entries, using an entry in perKeyQueues
* evictions from the mainPQ will require removal from the related perKeyQueue
* emptied perKeyQueues can be recycled for use with other keys
* evictions from the perKeyQueue will require removal from the mainPQ
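
A rough pseudo-Java sketch of that insert path (all names assumed; maxPerKey, 
mainPQ and perKeyQueues are the fields sketched above, and lessThan() is 
PriorityQueue's ordering method):

{code}
T insert(T doc) {
  PriorityQueue<T> perKey =
      perKeyQueues.computeIfAbsent(doc.getKey(), k -> newPerKeyQueue());
  if (perKey.size() == maxPerKey) {
    T weakest = perKey.top();            // weakest hit for this key
    if (!lessThan(weakest, doc)) {
      return doc;                        // not competitive within its own key
    }
    perKey.pop();                        // evict within the key...
    mainPQ.remove(weakest);              // ...and mirror that in the main PQ
  }
  perKey.add(doc);
  return mainPQ.insertWithOverflow(doc); // evictions here must also update perKeyQueues
}
{code}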

bq. This seems promising, maybe as a separate dedicated (forked) PQ impl?

Yes, introducing a linear-cost remove by marking elements with a position is an 
added cost that not all PQs will require, so forking seems necessary. In that 
case a common abstraction over these different PQs would be useful for the 
places where results are consumed, e.g. TopDocsCollector.


 New remove method in PriorityQueue
 

 Key: LUCENE-6066
 URL: https://issues.apache.org/jira/browse/LUCENE-6066
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/query/scoring
Reporter: Mark Harwood
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-PQRemoveV1.patch


 It would be useful to be able to remove existing elements from a 
 PriorityQueue. 
 The proposal is that a linear scan is performed to find the element being 
 removed and then the end element in heap[size] is swapped into this position 
 to perform the delete. The method downHeap() is then called to shuffle the 
 replacement element back down the array but the existing downHeap method must 
 be modified to allow picking up an entry from any point in the array rather 
 than always assuming the first element (which is its only current mode of 
 operation).
 A working javascript model of the proposal with animation is available here: 
 http://jsfiddle.net/grcmquf2/22/ 
 In tests the modified version of downHeap produces the same results as the 
 existing impl but adds the ability to push down from any point.
 An example use case that requires remove is where a client doesn't want more 
 than N matches for any given key (e.g. no more than 5 products from any one 
 retailer in a marketplace). In these circumstances a document that was 
 previously thought of as competitive has to be removed from the final PQ and 
 replaced with another doc (eg a retailer who already has 5 matches in the PQ 
 receives a 6th match which is better than his previous ones). This particular 
 process is managed by a special DiversifyingPriorityQueue which wraps the 
 main PriorityQueue and could be contributed as part of another issue if there 
 is interest in that. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220161#comment-14220161
 ] 

ASF subversion and git services commented on LUCENE-5950:
-

Commit 1640833 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1640833 ]

LUCENE-5950: Move to Java 8 as minimum Java version

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_67) - Build # 4443 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4443/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseSerialGC (asserts: true)

2 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.testDistribSearch

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([3DCF993F0F4631A6]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeySafeLeaderTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([3DCF993F0F4631A6]:0)




Build Log:
[...truncated 10853 lines...]
   [junit4] Suite: org.apache.solr.cloud.ChaosMonkeySafeLeaderTest
   [junit4]   2> Creating dataDir: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.ChaosMonkeySafeLeaderTest-3DCF993F0F4631A6-001\init-core-data-001
   [junit4]   2> 2182043 T3968 oas.SolrTestCaseJ4.buildSSLConfig Randomized ssl 
(true) and clientAuth (false)
   [junit4]   2> 2182043 T3968 
oas.BaseDistributedSearchTestCase.initHostContext Setting hostContext system 
property: /ugpw/
   [junit4]   2> 2182067 T3968 oas.SolrTestCaseJ4.setUp ###Starting 
testDistribSearch
   [junit4]   2> 2182067 T3968 oasc.ZkTestServer.run STARTING ZK TEST SERVER
   [junit4]   1> client port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2182073 T3969 oasc.ZkTestServer$ZKServerMain.runFromConfig 
Starting server
   [junit4]   2> 2182196 T3968 oasc.ZkTestServer.run start zk server on 
port:56899
   [junit4]   2> 2182196 T3968 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2182199 T3968 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2182206 T3976 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@2bb960f6 
name:ZooKeeperConnection Watcher:127.0.0.1:56899 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2182206 T3968 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2182206 T3968 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2182206 T3968 oascc.SolrZkClient.makePath makePath: /solr
   [junit4]   2> 2182215 T3968 
oascc.SolrZkClient.createZkCredentialsToAddAutomatically Using default 
ZkCredentialsProvider
   [junit4]   2> 2182217 T3968 oascc.ConnectionManager.waitForConnected Waiting 
for client to connect to ZooKeeper
   [junit4]   2> 2182226 T3979 oascc.ConnectionManager.process Watcher 
org.apache.solr.common.cloud.ConnectionManager@70e505b9 
name:ZooKeeperConnection Watcher:127.0.0.1:56899/solr got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2182227 T3968 oascc.ConnectionManager.waitForConnected Client 
is connected to ZooKeeper
   [junit4]   2> 2182227 T3968 oascc.SolrZkClient.createZkACLProvider Using 
default ZkACLProvider
   [junit4]   2> 2182227 T3968 oascc.SolrZkClient.makePath makePath: 
/collections/collection1
   [junit4]   2> 2182232 T3968 oascc.SolrZkClient.makePath makePath: 
/collections/collection1/shards
   [junit4]   2> 2182236 T3968 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection
   [junit4]   2> 2182240 T3968 oascc.SolrZkClient.makePath makePath: 
/collections/control_collection/shards
   [junit4]   2> 2182244 T3968 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig-tlog.xml
 to /configs/conf1/solrconfig.xml
   [junit4]   2> 2182244 T3968 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.xml
   [junit4]   2> 2182250 T3968 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\schema15.xml
 to /configs/conf1/schema.xml
   [junit4]   2> 2182250 T3968 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/schema.xml
   [junit4]   2> 2182258 T3968 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\solrconfig.snippet.randomindexconfig.xml
 to /configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2182258 T3968 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/solrconfig.snippet.randomindexconfig.xml
   [junit4]   2> 2182264 T3968 oasc.AbstractZkTestCase.putConfig put 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\core\src\test-files\solr\collection1\conf\stopwords.txt
 to /configs/conf1/stopwords.txt
   [junit4]   2> 2182264 T3968 oascc.SolrZkClient.makePath makePath: 
/configs/conf1/stopwords.txt
   [junit4]   2> 2182269 T3968 oasc.AbstractZkTestCase.putConfig 

[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220223#comment-14220223
 ] 

ASF subversion and git services commented on LUCENE-5950:
-

Commit 1640837 from [~thetaphi] in branch 'dev/trunk'
[ https://svn.apache.org/r1640837 ]

LUCENE-5950: Disable Eclipse null analysis on Java 8 (requires @Null stuff 
Robert and I hate)

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-20 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220272#comment-14220272
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

Since we reopened: the other thing I am finding super-confusing is how the name 
in the -e parameter does not match the config set, nor does it match the names 
in the leftover example directory. Especially with the current non-gold-master 
issue.

Would it make sense to have things somehow consistent? 

Also, in the examples directory, _example-DIH_ can be used with the _-e dih_ 
flag. But multicore is not, right? Is there a reason for the inconsistency? 

I feel that a beginner user would treat every inconsistency as meaningful 
and/or confusing.



 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5950) Move to Java 8 in trunk

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220304#comment-14220304
 ] 

ASF subversion and git services commented on LUCENE-5950:
-

Commit 1640843 from [~rjernst] in branch 'dev/trunk'
[ https://svn.apache.org/r1640843 ]

LUCENE-5950: Remove ecj hacks no longer necessary with current ecj settings

 Move to Java 8 in trunk
 ---

 Key: LUCENE-5950
 URL: https://issues.apache.org/jira/browse/LUCENE-5950
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Ryan Ernst
 Attachments: LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch, 
 LUCENE-5950.patch, LUCENE-5950.patch, LUCENE-5950.patch


 The dev list thread [VOTE] Move trunk to Java 8 passed.
 http://markmail.org/thread/zcddxioz2yvsdqkc
 This issue is to actually move trunk to java 8.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-20 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220335#comment-14220335
 ] 

Alexandre Rafalovitch commented on SOLR-3619:
-

Sorry for spamming this JIRA, but I think I just hit an even bigger *immutable 
master* issue. We have different examples stepping on each other's feet.

Specifically, the _techproducts_ and _schemaless_ examples both create the 
actual Solr collections inside the *server/solr* directory. So, if you run 
both, it will create two more collections there.

Then, if you run the _cloud_ example, it will clone this augmented server 
directory and use that as the base of node1 and node2 (created, strangely 
enough, in the current directory).

So, when you actually try to access the Admin UI, it auto-discovers and tries 
to load those extra collections as well and fails, as the configName is - I am 
guessing - relative to an unexpected location and does not get resolved. 

{quote}
techproducts: 
org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
 Could not find configName for collection techproducts found:null
schemaless: 
org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
 Could not find configName for collection schemaless found:null 
{quote}

Would it make sense to:
# Make configsets not changeable by cloning the configuration
# Make all generated examples show up in the *solr/example* directory, as 
that's where the other examples are anyway
# Switch the question order in the *cloud* example to ask for the collection 
name first and use that as a root directory name under *solr/example*, with 
the nodes inside of that

I am also not sure that the *cloud* example is restartable once it is shut 
down. But that's probably a different issue.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-20 Thread Alexandre Rafalovitch (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220335#comment-14220335
 ] 

Alexandre Rafalovitch edited comment on SOLR-3619 at 11/21/14 1:00 AM:
---

Sorry for spamming this JIRA, but I think I just hit an even bigger *immutable 
master* issue. We have different examples stepping on each other's feet.

Specifically, the _techproducts_ and _schemaless_ examples both create the 
actual Solr collections inside the *server/solr* directory. So, if you run 
both, it will create two more collections there.

Then, if you run the _cloud_ example, it will clone this augmented server 
directory and use that as the base of node1 and node2 (created, strangely 
enough, in the current directory).

So, when you actually try to access the Admin UI, it auto-discovers and tries 
to load those extra collections as well and fails, as the configName is - I am 
guessing - relative to an unexpected location and does not get resolved. 

{quote}
techproducts: 
org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
 Could not find configName for collection techproducts found:null
schemaless: 
org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
 Could not find configName for collection schemaless found:null 
{quote}

Would it make sense to:
# Make configsets not changeable by cloning the configuration
# Make all generated examples show up in the *solr/example* directory, as 
that's where the other examples are anyway
# Switch the question order in the *cloud* example to ask for the collection 
name first and use that as a root directory name under *solr/example*, with 
the nodes inside of that

I am also not sure that the *cloud* example is restartable once it is shut 
down. But that's probably a different issue.


was (Author: arafalov):
Sorry, for spamming this JIRA, but I think I just hit an even bigger *immutable 
master* issue. We have different examples stepping on each others' feet.

Specifically, the examples _techproducts_ and _schemaless_ both create the 
actual Solr collections inside the *server/solr* directory. So, if you run 
both, it will create two more collections there.

Then, if you run _cloud_ example, it will clone this augmented server directory 
and use that as a base of node1 and node2 (created, strangely enough, in the 
current directory).

So, when you actually try to access the Admin UI, it auto-discovers and tries 
to load those extra collections as well and fails as configName is - I am 
guessing - is relative to unexpected location and does not get resolved. 

{quote}
techproducts: 
org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
 Could not find configName for collection techproducts found:null
schemaless: 
org.apache.solr.common.cloud.ZooKeeperException:org.apache.solr.common.cloud.ZooKeeperException:
 Could not find configName for collection schemaless found:null 
{quote}

Would it make sense to:
# Make configsets not changeable by cloning the configuration
# Make all generated examples showing up in *solr/example directory as that's 
where the other examples are anyway
# Make switch question order in the *cloud* example to ask for collection name 
first and use that as a root directory name under the *solr/example* with nodes 
inside of that

I am also not sure that *cloud* example is restartable once it is shutdown. But 
that's probably a different issue.

 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6761) Ability to ignore commit and optimize requests from clients when running in SolrCloud mode.

2014-11-20 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220418#comment-14220418
 ] 

Ramkumar Aiyengar commented on SOLR-6761:
-

I like the idea, with the minor exception that it sounds wrong to return 200 
instead of a 4xx. The client is making some effort to add the commit request and 
should know that it's not been respected. If it breaks them, so be it; they are 
doing something the system is not configured to do. They might actually even 
rely on the assumption that once the commit is done it's immediately available 
for search.

 Ability to ignore commit and optimize requests from clients when running in 
 SolrCloud mode.
 ---

 Key: SOLR-6761
 URL: https://issues.apache.org/jira/browse/SOLR-6761
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud, SolrJ
Reporter: Timothy Potter

 In most SolrCloud environments, it's advisable to only rely on auto-commits 
 (soft and hard) configured in solrconfig.xml and not send explicit commit 
 requests from client applications. In fact, I've seen cases where improperly 
 coded client applications can send commit requests too frequently, which can 
 lead to harming the cluster's health. 
 As a system administrator, I'd like the ability to disallow commit requests 
 from client applications. Ideally, I could configure the updateHandler to 
 ignore the requests and return an HTTP response code of my choosing as I may 
 not want to break existing client applications by returning an error. In 
 other words, I may want to just return 200 vs. 405. The same goes for 
 optimize requests.
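
One possible shape, sketched as an update request processor (the class name and 
wiring are assumptions for illustration, not an existing Solr API):

{code}
import org.apache.solr.update.CommitUpdateCommand;
import org.apache.solr.update.processor.UpdateRequestProcessor;

/** Sketch: swallow explicit commit/optimize requests from clients; the
 *  auto-commit settings in solrconfig.xml still apply. Illustrative only. */
class IgnoreCommitsProcessor extends UpdateRequestProcessor {
  IgnoreCommitsProcessor(UpdateRequestProcessor next) {
    super(next);
  }

  @Override
  public void processCommit(CommitUpdateCommand cmd) {
    // Deliberately do not forward the commit to the next processor (an
    // optimize arrives as a commit too). A real version might instead throw
    // a SolrException carrying a configurable HTTP status code.
  }
}
{code}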



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6772) Support regex based atomic remove

2014-11-20 Thread Steven Bower (JIRA)
Steven Bower created SOLR-6772:
--

 Summary: Support regex based atomic remove
 Key: SOLR-6772
 URL: https://issues.apache.org/jira/browse/SOLR-6772
 Project: Solr
  Issue Type: Bug
Reporter: Steven Bower


This is a follow-on ticket from SOLR-3862 ... The goal here is to support 
regex-based field value removal for the following use cases:

1. You may not know the values you'd like to remove; imagine a permissioning 
case: [ user-u1, user-u2, group-g1 ] where you want to remove all users (i.e. 
user-.*)

2. You may have a large number of values and it would be expensive to list them 
all, but you could encapsulate them in a regex.
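
For illustration, here is what such an update could look like from SolrJ, next 
to the existing 'remove' op from SOLR-3862 (the 'removeregex' op name is just 
this ticket's proposal, not something that exists yet):

{code}
import java.util.Collections;
import org.apache.solr.common.SolrInputDocument;

class RemoveRegexSketch {
  static SolrInputDocument buildAtomicUpdate() {
    SolrInputDocument doc = new SolrInputDocument();
    doc.addField("id", "doc-1");
    // Existing op (SOLR-3862): remove one exact value, e.g.
    //   Collections.singletonMap("remove", "user-u1")
    // Proposed op (this ticket): remove every value matching a regex.
    doc.addField("perms", Collections.singletonMap("removeregex", "user-.*"));
    return doc; // send via SolrServer.add(doc) and commit as usual
  }
}
{code}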



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3862) add remove as update option for atomically removing a value from a multivalued field

2014-11-20 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220457#comment-14220457
 ] 

Steven Bower commented on SOLR-3862:


Added ticket for regex based removal SOLR-6772

 add remove as update option for atomically removing a value from a 
 multivalued field
 --

 Key: SOLR-3862
 URL: https://issues.apache.org/jira/browse/SOLR-3862
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Jim Musil
Assignee: Erick Erickson
 Fix For: 4.9, Trunk

 Attachments: SOLR-3862-2.patch, SOLR-3862-3.patch, SOLR-3862-4.patch, 
 SOLR-3862.patch, SOLR-3862.patch, SOLR-3862.patch, SOLR-3862.patch, 
 SOLR-3862.patch


 Currently you can atomically add a value to a multivalued field. It would 
 be useful to be able to remove a value from a multivalued field. 
 When you set a multivalued field to null, it destroys all values.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6772) Support regex based atomic remove

2014-11-20 Thread Steven Bower (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220458#comment-14220458
 ] 

Steven Bower commented on SOLR-6772:


Patch and tests shortly.

 Support regex based atomic remove
 -

 Key: SOLR-6772
 URL: https://issues.apache.org/jira/browse/SOLR-6772
 Project: Solr
  Issue Type: Bug
Reporter: Steven Bower

 This is a follow-on ticket from SOLR-3862 ... The goal here is to support 
 regex-based field value removal for the following use cases:
 1. You may not know the values you'd like to remove; imagine a permissioning 
 case: [ user-u1, user-u2, group-g1 ] where you want to remove all users (i.e. 
 user-.*)
 2. You may have a large number of values and it would be expensive to list 
 them all, but you could encapsulate them in a regex.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6773) Remove the multicore example as the DIH and cloud examples illustrate multicore behavior

2014-11-20 Thread Timothy Potter (JIRA)
Timothy Potter created SOLR-6773:


 Summary: Remove the multicore example as the DIH and cloud 
examples illustrate multicore behavior
 Key: SOLR-6773
 URL: https://issues.apache.org/jira/browse/SOLR-6773
 Project: Solr
  Issue Type: Improvement
Reporter: Timothy Potter


As discussed in SOLR-3619, we should get rid of the multicore example; there 
are unit tests that rely on that directory, so they will need to be refactored. 
It may make sense to just move the multicore directory under test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #763: POMs out of sync

2014-11-20 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/763/

3 tests failed.
FAILED:  
org.apache.solr.hadoop.MorphlineMapperTest.org.apache.solr.hadoop.MorphlineMapperTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([C0DDD1FB39FEE976]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:92)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([65795FD50E02A01B]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([65795FD50E02A01B]:0)




Build Log:
[...truncated 53922 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:548: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:200: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 405 minutes 19 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-3619) Rename 'example' dir to 'server' and pull examples into an 'examples' directory

2014-11-20 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220482#comment-14220482
 ] 

Timothy Potter commented on SOLR-3619:
--

Thanks for digging in [~arafalov] ... your feedback is much appreciated.

Agreed on the *create_core* cloning the configset, e.g. if I do: {{bin/solr 
create_core -n foo -c basic_configs}}, then the create_core action will:
{code}
mkdir server/solr/foo
cp -r server/solr/configsets/basic_configs/conf server/solr/foo/conf
{code}

As for the names of the configsets and the examples, I used the names Hoss 
suggested in his comment above for the configsets, but heard rumblings at Rev 
that others didn't like the long names ;-) It's easy to change the names at 
this point, so what do we want them to be called? I'm cool with whatever people 
think is short but descriptive enough.

multicore - ugh! I intended to just get rid of it, but there are unit tests 
that rely on that directory. It should be removed under a separate ticket 
(SOLR-6773); I don't think it should be an example anymore, but if we don't get 
rid of it, then I can add it back as an example in the bin/solr script, as the 
inconsistency will be confusing and that's exactly what we don't want.

As for the {{bin/solr -e cloud}} example being affected by artifacts from 
running other examples, I think we can just have the script clean up 
unrecognized directories after cloning, i.e.

{code}
cp -r server node1
rm -r node1/solr/unrecognized_dir
cp -r node1 node2
{code}

This is just an example, so putting the node1 directory into the expected state 
after cloning seems reasonable, albeit a bit of a maintenance issue if the list 
of expected dirs changes, but that happens very infrequently.

Lastly, you can restart the cloud example, but you have to just use the 
bin/solr options directly. For instance, if you launched {{bin/solr -e cloud 
-noprompt}} (2 nodes on the default ports), you could stop and restart using:

{code}
bin/solr stop -all
bin/solr restart -c -p 8983 -d node1
bin/solr restart -c -p 7574 -d node2 -z localhost:9983
{code}

The example prints out these commands as it runs to help the user make the link 
between what the example is doing and the command-line options supported by the 
script.


 Rename 'example' dir to 'server' and pull examples into an 'examples' 
 directory
 ---

 Key: SOLR-3619
 URL: https://issues.apache.org/jira/browse/SOLR-3619
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Timothy Potter
 Fix For: 5.0, Trunk

 Attachments: SOLR-3619.patch, SOLR-3619.patch, SOLR-3619.patch, 
 managed-schema, server-name-layout.png, solrconfig.xml






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.10-Linux (64bit/jdk1.7.0_67) - Build # 106 - Failure!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/106/
Java: 64bit/jdk1.7.0_67 -XX:+UseCompressedOops -XX:+UseParallelGC (asserts: 
true)

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([3D096576819A1AA2]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:620)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:183)
at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to 
approximately 10,936,160 bytes (threshold is 10,485,760). Field reference sizes 
(counted individually):
  - 12,153,184 bytes, protected static org.apache.solr.core.SolrConfig org.apache.solr.SolrTestCaseJ4.solrConfig
  - 11,768,592 bytes, protected static org.apache.solr.util.TestHarness$LocalRequestFactory org.apache.solr.SolrTestCaseJ4.lrf
  - 11,768,352 bytes, protected static org.apache.solr.util.TestHarness org.apache.solr.SolrTestCaseJ4.h
  - 328 bytes, private static java.util.regex.Pattern org.apache.solr.SolrTestCaseJ4.nonEscapedSingleQuotePattern
  - 232 bytes, public static java.io.File org.apache.solr.cloud.AbstractZkTestCase.SOLRHOME
  - 224 bytes, private static java.util.regex.Pattern org.apache.solr.SolrTestCaseJ4.escapedSingleQuotePattern
  - 208 bytes, public static org.junit.rules.TestRule org.apache.solr.SolrTestCaseJ4.solrClassRules
  - 200 bytes, protected static java.lang.String org.apache.solr.SolrTestCaseJ4.testSolrHome
  - 128 bytes, private static java.lang.String org.apache.solr.SolrTestCaseJ4.factoryProp
  - 72 bytes, protected static java.lang.String org.apache.solr.SolrTestCaseJ4.configString
  - 64 bytes, private static java.lang.String org.apache.solr.SolrTestCaseJ4.coreName
  - 64 bytes, protected static java.lang.String org.apache.solr.SolrTestCaseJ4.schemaString

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), 
your test seems to hang on to approximately 10,936,160 bytes (threshold is 
10,485,760). Field reference sizes (counted individually):
  - 12,153,184 bytes, protected static org.apache.solr.core.SolrConfig 
org.apache.solr.SolrTestCaseJ4.solrConfig
  - 11,768,592 bytes, protected static 
org.apache.solr.util.TestHarness$LocalRequestFactory 
org.apache.solr.SolrTestCaseJ4.lrf
  - 11,768,352 bytes, protected static 

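The checker's own hint points at the usual remedy: release heavy static 
references in an @AfterClass hook. A minimal sketch (hypothetical test class, 
not the actual TestManagedResourceStorage fix):
{code}
import org.junit.AfterClass;
import org.junit.Test;

public class SomeHeavyTest {
  // a deliberately large static fixture, standing in for the flagged fields
  private static byte[] bigFixture = new byte[16 * 1024 * 1024];

  @Test
  public void usesFixture() {
    org.junit.Assert.assertEquals(16 * 1024 * 1024, bigFixture.length);
  }

  @AfterClass
  public static void releaseStatics() {
    bigFixture = null; // drop the reference so the suite-level leak check passes
  }
}
{code}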
[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1949 - Still Failing!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1949/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: true)

All tests passed

Build Log:
[...truncated 43529 lines...]
BUILD FAILED
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:515: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/build.xml:79: The following 
error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/build.xml:188: The 
following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/common-build.xml:1893: 
The following error occurred while executing this line:
/Users/jenkins/workspace/Lucene-Solr-trunk-MacOSX/lucene/common-build.xml:1921: 
Compile failed; see the compiler error output for details.

Total time: 206 minutes 57 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.8.0 
-XX:-UseCompressedOops -XX:+UseConcMarkSweepGC (asserts: true)
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-6533) Support editing common solrconfig.xml values

2014-11-20 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220567#comment-14220567
 ] 

ASF subversion and git services commented on SOLR-6533:
---

Commit 1640857 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1640857 ]

SOLR-6533 Added a testcase for config reload, hardened watching for changes

 Support editing common solrconfig.xml values
 

 Key: SOLR-6533
 URL: https://issues.apache.org/jira/browse/SOLR-6533
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
 Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch


 There are a bunch of properties in solrconfig.xml which users want to edit. 
 We will attack them first.
 These properties will be persisted to a separate file called config.json (or 
 whatever file). Instead of saving in the same format, we will have well-known 
 properties which users can directly edit:
 {code}
 updateHandler.autoCommit.maxDocs
 query.filterCache.initialSize
 {code}
 The API will be modeled around the bulk schema API:
 {code:javascript}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json' -d '{
 "set-property" : {"updateHandler.autoCommit.maxDocs":5},
 "unset-property": "updateHandler.autoCommit.maxDocs"
 }'
 {code}
 {code:javascript}
 // or use this to set ${mypropname} values
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json' -d '{
 "set-user-property" : {"mypropname":"my_prop_val"},
 "unset-user-property": "mypropname"
 }'
 {code}
 The values stored in config.json will always take precedence and will be 
 applied after loading solrconfig.xml. 
 An HTTP GET on the /config path will give the real config that is applied. 
 An HTTP GET of /config/overlay gives out the content of configOverlay.json. 
 /config/component-name gives only the child of the same name from /config.

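 The same round trip as the curl examples above, done programmatically; a 
 sketch only, with the example property and plain JDK HTTP calls (the patch 
 itself prescribes no client):
 {code}
 import java.io.InputStream;
 import java.io.OutputStream;
 import java.net.HttpURLConnection;
 import java.net.URL;

 public class ConfigRoundTrip {
   public static void main(String[] args) throws Exception {
     // POST the set-property command, mirroring the curl example
     HttpURLConnection post = (HttpURLConnection)
         new URL("http://localhost:8983/solr/collection1/config").openConnection();
     post.setRequestMethod("POST");
     post.setDoOutput(true);
     post.setRequestProperty("Content-type", "application/json");
     try (OutputStream os = post.getOutputStream()) {
       os.write("{\"set-property\":{\"updateHandler.autoCommit.maxDocs\":5}}".getBytes("UTF-8"));
     }
     System.out.println("POST /config -> " + post.getResponseCode());

     // then read back /config/overlay to confirm the edit landed
     try (InputStream in = new URL(
         "http://localhost:8983/solr/collection1/config/overlay").openStream()) {
       byte[] buf = new byte[8192];
       for (int n; (n = in.read(buf)) != -1; ) System.out.write(buf, 0, n);
     }
   }
 }
 {code}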


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6533) Support editing common solrconfig.xml values

2014-11-20 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6533:
-
Attachment: SOLR-6533.patch

the patch for the latest commit

 Support editing common solrconfig.xml values
 

 Key: SOLR-6533
 URL: https://issues.apache.org/jira/browse/SOLR-6533
 Project: Solr
  Issue Type: Sub-task
Reporter: Noble Paul
 Attachments: SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, 
 SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch, SOLR-6533.patch


 There are a bunch of properties in solrconfig.xml which users want to edit. 
 We will attack them first.
 These properties will be persisted to a separate file called config.json (or 
 whatever file). Instead of saving in the same format, we will have well-known 
 properties which users can directly edit:
 {code}
 updateHandler.autoCommit.maxDocs
 query.filterCache.initialSize
 {code}
 The API will be modeled around the bulk schema API:
 {code:javascript}
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json' -d '{
 "set-property" : {"updateHandler.autoCommit.maxDocs":5},
 "unset-property": "updateHandler.autoCommit.maxDocs"
 }'
 {code}
 {code:javascript}
 // or use this to set ${mypropname} values
 curl http://localhost:8983/solr/collection1/config -H 
 'Content-type:application/json' -d '{
 "set-user-property" : {"mypropname":"my_prop_val"},
 "unset-user-property": "mypropname"
 }'
 {code}
 The values stored in config.json will always take precedence and will be 
 applied after loading solrconfig.xml. 
 An HTTP GET on the /config path will give the real config that is applied. 
 An HTTP GET of /config/overlay gives out the content of configOverlay.json. 
 /config/component-name gives only the child of the same name from /config.

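 The stated precedence rule as a minimal sketch (hypothetical map names, not 
 the patch's actual classes): overlay entries are applied after the parsed 
 solrconfig.xml values, so they win on conflict.
 {code}
 import java.util.HashMap;
 import java.util.Map;

 class OverlaySketch {
   static Map<String,Object> effective(Map<String,Object> fromSolrconfigXml,
                                       Map<String,Object> fromConfigOverlay) {
     // start from the parsed solrconfig.xml values...
     Map<String,Object> merged = new HashMap<>(fromSolrconfigXml);
     // ...then apply the overlay last, so its entries take precedence
     merged.putAll(fromConfigOverlay);
     return merged;
   }
 }
 {code}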


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_20) - Build # 11644 - Failure!

2014-11-20 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11644/
Java: 32bit/jdk1.8.0_20 -server -XX:+UseG1GC (asserts: false)

All tests passed

Build Log:
[...truncated 43731 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:515: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/build.xml:79: The following 
error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/build.xml:188: The 
following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1893:
 The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-trunk-Linux/lucene/common-build.xml:1921:
 Compile failed; see the compiler error output for details.

Total time: 105 minutes 57 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.8.0_20 -server 
-XX:+UseG1GC (asserts: false)
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-5205) [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to classic QueryParser

2014-11-20 Thread Modassar Ather (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5205?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14220574#comment-14220574
 ] 

Modassar Ather commented on LUCENE-5205:


Sorry for replying a little late [~talli...@apache.org].
bq. Ah, ok, so to confirm, no further action is required from me on the  issue?
As of now I see no issue with its usage in the query, as it is getting removed 
in my analyzer's chain.

bq. Are you ok with single quotes becoming operators? Can you see a way of 
improving that behavior?
We have been using double quotes and square brackets for phrase and nested 
phrase queries respectively. We have not used single quotes for the same.

 [PATCH] SpanQueryParser with recursion, analysis and syntax very similar to 
 classic QueryParser
 ---

 Key: LUCENE-5205
 URL: https://issues.apache.org/jira/browse/LUCENE-5205
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Reporter: Tim Allison
  Labels: patch
 Attachments: LUCENE-5205-cleanup-tests.patch, 
 LUCENE-5205-date-pkg-prvt.patch, LUCENE-5205.patch.gz, LUCENE-5205.patch.gz, 
 LUCENE-5205_dateTestReInitPkgPrvt.patch, 
 LUCENE-5205_improve_stop_word_handling.patch, 
 LUCENE-5205_smallTestMods.patch, LUCENE_5205.patch, 
 SpanQueryParser_v1.patch.gz, patch.txt


 This parser extends QueryParserBase and includes functionality from:
 * Classic QueryParser: most of its syntax
 * SurroundQueryParser: recursive parsing for near and not clauses.
 * ComplexPhraseQueryParser: can handle near queries that include multiterms 
 (wildcard, fuzzy, regex, prefix),
 * AnalyzingQueryParser: has an option to analyze multiterms.
 At a high level, there's a first pass BooleanQuery/field parser and then a 
 span query parser handles all terminal nodes and phrases.
 Same as classic syntax:
 * term: test 
 * fuzzy: roam~0.8, roam~2
 * wildcard: te?t, test*, t*st
 * regex: /\[mb\]oat/
 * phrase: "jakarta apache"
 * phrase with slop: "jakarta apache"~3
 * default or clause: jakarta apache
 * grouping or clause: (jakarta apache)
 * boolean and +/-: (lucene OR apache) NOT jakarta; +lucene +apache -jakarta
 * multiple fields: title:lucene author:hatcher
  
 Main additions in SpanQueryParser syntax vs. classic syntax:
 * Can require in order for phrases with slop with the \~ operator: 
 "jakarta apache"\~3
 * Can specify not near: "fever bieber"!\~3,10 ::
 find fever but not if bieber appears within 3 words before or 10 
 words after it.
 * Fully recursive phrasal queries with \[ and \]; as in: \[\[jakarta 
 apache\]~3 lucene\]\~4 :: 
 find jakarta within 3 words of apache, and that hit has to be within 
 four words before lucene
 * Can also use \[\] for single level phrasal queries instead of "" as in: 
 \[jakarta apache\]
 * Can use or grouping clauses in phrasal queries: apache (lucene solr)\~3 
 :: find apache and then either lucene or solr within three words.
 * Can use multiterms in phrasal queries: "jakarta\~1 ap*che"\~2
 * Did I mention full recursion: \[\[jakarta\~1 ap*che\]\~2 (solr~ 
 /l\[ou\]\+\[cs\]\[en\]\+/)]\~10 :: Find something like jakarta within two 
 words of ap*che and that hit has to be within ten words of something like 
 solr or that lucene regex.
 * Can require at least x number of hits at boolean level: apache AND (lucene 
 solr tika)~2
 * Can use negative only query: -jakarta :: Find all docs that don't contain 
 jakarta
 * Can use an edit distance > 2 for fuzzy query via SlowFuzzyQuery (beware of 
 potential performance issues!).
 Trivial additions:
 * Can specify prefix length in fuzzy queries: jakarta~1,2 (edit distance =1, 
 prefix =2)
 * Can specify Optimal String Alignment (OSA) vs Levenshtein for edit distance 
 <= 2: jakarta~1 (OSA) vs jakarta~1 (Levenshtein)
 This parser can be very useful for concordance tasks (see also LUCENE-5317 
 and LUCENE-5318) and for analytical search.  
 Until LUCENE-2878 is closed, this might have a use for fans of SpanQuery.
 Most of the documentation is in the javadoc for SpanQueryParser.
 Any and all feedback is welcome.  Thank you.
 Until this is added to the Lucene project, I've added a standalone 
 lucene-addons repo (with jars compiled for the latest stable build of Lucene) 
  on [github|https://github.com/tballison/lucene-addons].

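 As a rough illustration (not the parser's internals), the bracketed query 
 \[jakarta apache\]~3 corresponds approximately to a SpanNearQuery built from 
 stock Lucene span classes; the field name f below is a placeholder:
 {code}
 import org.apache.lucene.index.Term;
 import org.apache.lucene.search.spans.SpanNearQuery;
 import org.apache.lucene.search.spans.SpanQuery;
 import org.apache.lucene.search.spans.SpanTermQuery;

 public class BracketSketch {
   public static void main(String[] args) {
     // [jakarta apache]~3, roughly: both terms within 3 positions of each
     // other; the inOrder flag would flip to true for the order-requiring
     // \~ variant described above
     SpanQuery jakarta = new SpanTermQuery(new Term("f", "jakarta"));
     SpanQuery apache = new SpanTermQuery(new Term("f", "apache"));
     SpanQuery near = new SpanNearQuery(new SpanQuery[]{jakarta, apache}, 3, false);
     System.out.println(near);
   }
 }
 {code}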


--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org