[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_11) - Build # 11005 - Failure!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11005/
Java: 32bit/jdk1.8.0_11 -server -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.OverseerStatusTest.testDistribSearch

Error Message:
reloadcollection the collection time out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: reloadcollection the collection time out:180s
    at __randomizedtesting.SeedInfo.seed([75ABF982A2A7297C:F44D779AD5F84940]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:550)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
    at org.apache.solr.cloud.AbstractFullDistribZkTestBase.invokeCollectionApi(AbstractFullDistribZkTestBase.java:1739)
    at org.apache.solr.cloud.OverseerStatusTest.doTest(OverseerStatusTest.java:103)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-6304) Transforming and Indexing custom JSON data

2014-08-12 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093821#comment-14093821 ]

ASF subversion and git services commented on SOLR-6304:
---

Commit 1617424 from [~noble.paul] in branch 'dev/trunk'
[ https://svn.apache.org/r1617424 ]

SOLR-6304 wildcard fix

 Transforming and Indexing custom JSON data
 --

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10

 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   id: 0001,
   type: donut,
   name: Cake,
   ppu: 0.55,
   batters: {
     batter: [
       { id: 1001, type: Regular },
       { id: 1002, type: Chocolate },
       { id: 1003, type: Blueberry },
       { id: 1004, type: Devil's Food }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { recipeId:001, recipeType:donut, id:1001, type:Regular }
 { recipeId:001, recipeType:donut, id:1002, type:Chocolate }
 { recipeId:001, recipeType:donut, id:1003, type:Blueberry }
 { recipeId:001, recipeType:donut, id:1004, type:Devil's food }
 {noformat}
 the split param is the element in the tree at which the input should be split into 
 multiple docs; the 'f' params are the field name mappings
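
The split/mapping behavior described above can be sketched in plain Python (illustrative only; `split_docs` and `resolve` are hypothetical helpers, not Solr's implementation):

```python
def split_docs(src, split_path, mappings):
    """Emit one doc per element at split_path, filling fields via mappings.

    mappings: output field name -> JSON path (as in the f= params).
    """
    def resolve(obj, path):
        # Walk a /a/b style path down from the given object.
        for key in path.strip("/").split("/"):
            obj = obj[key]
        return obj

    # The elements we split on (here a list at /batters/batter).
    for part in resolve(src, split_path):
        doc = {}
        for field, path in mappings.items():
            if path.startswith(split_path):
                # Path below the split point: resolve inside this part.
                doc[field] = resolve(part, path[len(split_path):])
            else:
                # Path outside the split point: resolve from the root doc.
                doc[field] = resolve(src, path)
        yield doc

src = {"id": "0001", "type": "donut", "name": "Cake", "ppu": 0.55,
       "batters": {"batter": [{"id": "1001", "type": "Regular"},
                              {"id": "1002", "type": "Chocolate"}]}}
mappings = {"recipeId": "/id", "recipeType": "/type",
            "id": "/batters/batter/id", "type": "/batters/batter/type"}
docs = list(split_docs(src, "/batters/batter", mappings))
# One output doc per batter, with recipeId/recipeType copied from the root.
```

One doc is produced per element under the split path, and each `f=` mapping is resolved either inside the split element or from the root, depending on where its path points.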



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6304) Transforming and Indexing custom JSON data

2014-08-12 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14093827#comment-14093827 ]

ASF subversion and git services commented on SOLR-6304:
---

Commit 1617425 from [~noble.paul] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1617425 ]

SOLR-6304 wildcard fix

 Transforming and Indexing custom JSON data
 --

 Key: SOLR-6304
 URL: https://issues.apache.org/jira/browse/SOLR-6304
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0, 4.10

 Attachments: SOLR-6304.patch, SOLR-6304.patch


 example
 {noformat}
 curl 'localhost:8983/update/json/docs?split=/batters/batter&f=recipeId:/id&f=recipeType:/type&f=id:/batters/batter/id&f=type:/batters/batter/type' -d '
 {
   id: 0001,
   type: donut,
   name: Cake,
   ppu: 0.55,
   batters: {
     batter: [
       { id: 1001, type: Regular },
       { id: 1002, type: Chocolate },
       { id: 1003, type: Blueberry },
       { id: 1004, type: Devil's Food }
     ]
   }
 }'
 {noformat}
 should produce the following output docs
 {noformat}
 { recipeId:001, recipeType:donut, id:1001, type:Regular }
 { recipeId:001, recipeType:donut, id:1002, type:Chocolate }
 { recipeId:001, recipeType:donut, id:1003, type:Blueberry }
 { recipeId:001, recipeType:donut, id:1004, type:Devil's food }
 {noformat}
 the split param is the element in the tree at which the input should be split into 
 multiple docs; the 'f' params are the field name mappings






[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0_11) - Build # 10886 - Failure!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10886/
Java: 64bit/jdk1.8.0_11 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
    at __randomizedtesting.SeedInfo.seed([D2076FDE16C233E1]:0)
    at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
    at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:617)
    at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:180)
    at sun.reflect.GeneratedMethodAccessor30.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
4 threads leaked from SUITE scope at org.apache.solr.rest.TestManagedResourceStorage: 
   1) Thread[id=3587, name=coreZkRegister-1940-thread-1, state=WAITING, group=TGRP-TestManagedResourceStorage]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=3584, name=searcherExecutor-1946-thread-1, state=WAITING, group=TGRP-TestManagedResourceStorage]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
        at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=3585, name=Thread-1523, state=WAITING, group=TGRP-TestManagedResourceStorage]
        at java.lang.Object.wait(Native Method)
        at java.lang.Object.wait(Object.java:502)
        at org.apache.solr.core.CloserThread.run(CoreContainer.java:894)
   4) Thread[id=3580, 

[jira] [Created] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)
Noble Paul created SOLR-6365:


 Summary: specify  appends, defaults, invariants outside of the 
component
 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul


The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
{code}
The idea is to specify the parameters in the same format as we pass them in the HTTP 
request and eliminate specifying our default components in solrconfig.xml.
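
The merging semantics this implies can be sketched in Python (the path-glob matching and merge order here are assumptions about the proposal, not Solr code):

```python
from fnmatch import fnmatch
from urllib.parse import parse_qs

# (path pattern, defaults, invariants, appends) as query strings, mirroring
# the hypothetical <params> tags above.
RULES = [
    ("/dataimport",       "config=data-config.xml", "", ""),
    ("/update/*",         "wt=json",                "", ""),
    ("/some-other-path*", "a=b&c=d&e=f",            "x=y", "i=j"),
]

def effective_params(path, request_params):
    """Compute the parameters a request at `path` would effectively see."""
    params = {k: list(v) for k, v in request_params.items()}
    for pattern, defaults, invariants, appends in RULES:
        if not fnmatch(path, pattern):
            continue
        for k, v in parse_qs(defaults).items():
            params.setdefault(k, v)            # defaults: request wins
        for k, v in parse_qs(invariants).items():
            params[k] = v                      # invariants: config wins
        for k, v in parse_qs(appends).items():
            params[k] = params.get(k, []) + v  # appends: accumulate
    return params
```

For example, a request to /update/json with no wt picks up wt=json, while an explicit wt=xml in the request overrides the default; an invariant would override the request instead.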

 






[jira] [Updated] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6365:
-

Description: 
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components, and to control these 
params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
{code}
The idea is to specify the parameters in the same format as we pass them in the HTTP 
request and eliminate specifying our default components in solrconfig.xml.

 

  was:
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
{code}
The idea is to specify the parameters in the same format as we pass them in the HTTP 
request and eliminate specifying our default components in solrconfig.xml.

 


 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 {code}
 The idea is to specify the parameters in the same format as we pass them in the HTTP 
 request and eliminate specifying our default components in solrconfig.xml.
  






[jira] [Created] (SOLR-6366) When I use minimum match and maxCollationTries parameters together in edismax, Solr gets stuck

2014-08-12 Thread JIRA
Harun Reşit Zafer created SOLR-6366:
---

 Summary: When I use minimum match and maxCollationTries parameters 
together in edismax, Solr gets stuck
 Key: SOLR-6366
 URL: https://issues.apache.org/jira/browse/SOLR-6366
 Project: Solr
  Issue Type: Bug
  Components: query parsers, spellchecker
Affects Versions: 4.9
 Environment: Windows-7 64-bit
Reporter: Harun Reşit Zafer


In the following configuration, when I use the mm and maxCollationTries parameters 
together, Solr gets stuck with no exception. The server starts and I can see the web 
admin GUI, but I can't navigate between tabs; it just says loading.

I tried different values for both parameters and found that values for mm less 
than 40% still work. 


<requestHandler name="/select" class="solr.SearchHandler">
  <!-- default values for query parameters can be specified, these
       will be overridden by parameters in the request
  -->
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <str name="defType">edismax</str>
    <int name="timeAllowed">1000</int>
    <str name="qf">title^3 title_s^2 content</str>
    <str name="pf">title content</str>
    <str name="fl">id,title,content,score</str>
    <float name="tie">0.1</float>
    <str name="lowercaseOperators">true</str>
    <str name="stopwords">true</str>
    <!-- <str name="mm">75%</str> -->
    <int name="rows">10</int>

    <str name="spellcheck">on</str>
    <str name="spellcheck.dictionary">default</str>
    <str name="spellcheck.dictionary">wordbreak</str>
    <str name="spellcheck.onlyMorePopular">true</str>
    <str name="spellcheck.count">5</str>
    <str name="spellcheck.maxResultsForSuggest">5</str>
    <str name="spellcheck.extendedResults">false</str>
    <str name="spellcheck.alternativeTermCount">2</str>
    <str name="spellcheck.collate">true</str>
    <str name="spellcheck.collateExtendedResults">true</str>
    <str name="spellcheck.maxCollationTries">5</str>
    <!-- <str name="spellcheck.collateParam.mm">100%</str> -->

    <str name="spellcheck.maxCollations">3</str>
  </lst>

  <arr name="last-components">
    <str>spellcheck</str>
  </arr>

</requestHandler>

Any idea? Thanks 






[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.8.0) - Build # 1729 - Still Failing!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1729/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
REGRESSION:  org.apache.solr.TestDistributedGrouping.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:57110

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting response from server at: https://127.0.0.1:57110
    at __randomizedtesting.SeedInfo.seed([5ADDAE45D749D7ED:DB3B205DA016B7D1]:0)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:560)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
    at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
    at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:116)
    at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:102)
    at org.apache.solr.BaseDistributedSearchTestCase.index_specific(BaseDistributedSearchTestCase.java:489)
    at org.apache.solr.TestDistributedGrouping.doTest(TestDistributedGrouping.java:139)
    at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:873)
    at sun.reflect.GeneratedMethodAccessor42.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 11006 - Still Failing!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11006/
Java: 64bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

2 tests failed.
REGRESSION:  
org.apache.lucene.codecs.lucene41.TestLucene41PostingsFormat.testRamBytesUsed

Error Message:


Stack Trace:
java.lang.NullPointerException
    at __randomizedtesting.SeedInfo.seed([D7D02A10D3449D7C:25733850193B822A]:0)
    at org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer$NormMap.getOrd(Lucene49NormsConsumer.java:249)
    at org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer.addNumericField(Lucene49NormsConsumer.java:150)
    at org.apache.lucene.index.NumericDocValuesWriter.flush(NumericDocValuesWriter.java:92)
    at org.apache.lucene.index.DefaultIndexingChain.writeNorms(DefaultIndexingChain.java:190)
    at org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:94)
    at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:441)
    at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:510)
    at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:621)
    at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3051)
    at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3027)
    at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1666)
    at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1642)
    at org.apache.lucene.index.BaseIndexFileFormatTestCase.testRamBytesUsed(BaseIndexFileFormatTestCase.java:222)
    at org.apache.lucene.index.BasePostingsFormatTestCase.testRamBytesUsed(BasePostingsFormatTestCase.java:94)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
    at java.lang.reflect.Method.invoke(Method.java:619)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6365:
-

Description: 
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components, and to control these 
params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
<!-- use json for all paths and _txt as the default search field -->
<params path="/**" defaults="wt=json&df=_txt" />
{code}
The idea is to specify the parameters in the same format as we pass them in the HTTP 
request and eliminate specifying our default components in solrconfig.xml.

 

  was:
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components, and to control these 
params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
{code}
The idea is to specify the parameters in the same format as we pass them in the HTTP 
request and eliminate specifying our default components in solrconfig.xml.

 


 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to specify the parameters in the same format as we pass them in the HTTP 
 request and eliminate specifying our default components in solrconfig.xml.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-08-12 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093891#comment-14093891
 ] 

Adrien Grand commented on LUCENE-5879:
--

Thanks Mike for working on this, this is a very exciting issue! I'm very 
curious what the space/speed trade-off will look like compared to static prefix 
encoding like NumericUtils does.

bq. Maybe we need a new FieldType boolean computeAutoPrefixTerms

Why would it be needed? If the search APIs use intersect, this should be 
transparent?





 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch


 This cool idea to generalize numeric/trie fields came from Adrien:
 Today, when we index a numeric field (LongField, etc.) we pre-compute
 (via NumericTokenStream) outside of indexer/codec which prefix terms
 should be indexed.
 But this can be inefficient: you set a static precisionStep, and
 always add those prefix terms regardless of how the terms in the field
 are actually distributed.  Yet typically in real world applications
 the terms have a non-random distribution.
 So, it should be better if instead the terms dict decides where it
 makes sense to insert prefix terms, based on how dense the terms are
 in each region of term space.
 This way we can speed up query time for both term (e.g. infix
 suggester) and numeric ranges, and it should let us use less index
 space and get faster range queries.
  
 This would also mean that min/maxTerm for a numeric field would now be
 correct, vs today where the externally computed prefix terms are
 placed after the full precision terms, causing hairy code like
 NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
 feasible.
 The terms dict can also do tricks not possible if you must live on top
 of its APIs, e.g. to handle the adversary/over-constrained case when a
 given prefix has too many terms following it but finer prefixes
 have too few (what block tree calls floor term blocks).
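The static scheme being generalized here can be made concrete: with a fixed precisionStep, every indexed value gets the same ladder of prefix terms, one per shift, regardless of how the values are distributed. The following is a conceptual illustration of that precisionStep ladder, not Lucene's actual NumericUtils/NumericTokenStream encoding:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of static precisionStep prefix generation: every value
// is indexed at shifts 0, step, 2*step, ..., dropping low-order bits each time.
public class StaticPrefixTerms {
    static List<String> prefixTerms(long value, int precisionStep) {
        List<String> terms = new ArrayList<>();
        for (int shift = 0; shift < 64; shift += precisionStep) {
            long prefix = value >>> shift;  // drop the low `shift` bits
            terms.add(shift + ":" + Long.toHexString(prefix));
        }
        return terms;
    }

    public static void main(String[] args) {
        // The same ladder of prefixes is emitted for every value,
        // whether the surrounding term space is dense or sparse.
        System.out.println(prefixTerms(0xCAFEBABEL, 16));
    }
}
```

The proposal is to let the terms dict choose these cut points adaptively from term density instead of emitting this fixed ladder for every value.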



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-08-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093924#comment-14093924
 ] 

Michael McCandless commented on LUCENE-5879:


{quote}
bq. Maybe we need a new FieldType boolean computeAutoPrefixTerms

Why would it be needed? If the search APIs use intersect, this should be 
transparent?
{quote}

I think being totally transparent would be the best solution!  This would mean 
BT will always index auto-prefix terms for DOCS_ONLY fields ... I'll just have 
to test what the indexing time / disk usage cost is.  If we need to make it 
optional at indexing time, I'm not sure what the API should look like to make 
it easy...

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-08-12 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093926#comment-14093926
 ] 

Robert Muir commented on LUCENE-5879:
-

If the existing API is confusing, making it worse by adding a confusing boolean 
won't really fix the problem.

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-trunk-Linux (64bit/ibm-j9-jdk7) - Build # 11006 - Still Failing!

2014-08-12 Thread Michael McCandless
This looks like the same "initialized final field in class instance
becomes null" bug in J9 that we've hit a few times ...?

In this case it's NormMap.singleByteRange, which is clearly
initialized to new short[256] ...

Mike McCandless

http://blog.mikemccandless.com


On Tue, Aug 12, 2014 at 3:47 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11006/
 Java: 64bit/ibm-j9-jdk7 
 -Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

 2 tests failed.
 REGRESSION:  
 org.apache.lucene.codecs.lucene41.TestLucene41PostingsFormat.testRamBytesUsed

 Error Message:


 Stack Trace:
 java.lang.NullPointerException
 at 
 __randomizedtesting.SeedInfo.seed([D7D02A10D3449D7C:25733850193B822A]:0)
 at 
 org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer$NormMap.getOrd(Lucene49NormsConsumer.java:249)
 at 
 org.apache.lucene.codecs.lucene49.Lucene49NormsConsumer.addNumericField(Lucene49NormsConsumer.java:150)
 at 
 org.apache.lucene.index.NumericDocValuesWriter.flush(NumericDocValuesWriter.java:92)
 at 
 org.apache.lucene.index.DefaultIndexingChain.writeNorms(DefaultIndexingChain.java:190)
 at 
 org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:94)
 at 
 org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:441)
 at 
 org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:510)
 at 
 org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:621)
 at org.apache.lucene.index.IndexWriter.doFlush(IndexWriter.java:3051)
 at org.apache.lucene.index.IndexWriter.flush(IndexWriter.java:3027)
 at 
 org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1666)
 at 
 org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1642)
 at 
 org.apache.lucene.index.BaseIndexFileFormatTestCase.testRamBytesUsed(BaseIndexFileFormatTestCase.java:222)
 at 
 org.apache.lucene.index.BasePostingsFormatTestCase.testRamBytesUsed(BasePostingsFormatTestCase.java:94)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
 at java.lang.reflect.Method.invoke(Method.java:619)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 

[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-08-12 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093929#comment-14093929
 ] 

Uwe Schindler commented on LUCENE-5879:
---

In fact, as far as I see: NumericRangeQuery with that would then just be a 
standard TermRangeQuery with a special binary upper/lower term?

Otherwise very cool!

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch





--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6199) SolrJ, using SolrInputDocument methods, requires entire document to be loaded into memory

2014-08-12 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093930#comment-14093930
 ] 

Karl Wright commented on SOLR-6199:
---

While I've closed the CONNECTORS-981 ticket, we continue to hope for a solution 
to this one.  Many ManifoldCF users cannot use this Solr option and continue to 
use the extracting update handler instead because of the memory issue.


 SolrJ, using SolrInputDocument methods, requires entire document to be loaded 
 into memory
 -

 Key: SOLR-6199
 URL: https://issues.apache.org/jira/browse/SOLR-6199
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.7.3
Reporter: Karl Wright

 ManifoldCF has historically used Solr's extracting update handler for 
 transmitting binary documents to Solr.  Recently, we've included Tika 
 processing of binary documents, and wanted instead to send an (unlimited by 
 ManifoldCF) character stream as a primary content field to Solr instead.  
 Unfortunately, it appears that the SolrInputDocument metaphor for receiving 
 extracted content and metadata requires that all fields be completely 
 converted to String objects.  This will cause ManifoldCF to certainly run out 
 of memory at some point, when multiple ManifoldCF threads all try to convert 
 large documents to in-memory strings at the same time.
 I looked into what would be needed to add streaming support to UpdateRequest 
 and SolrInputDocument.  Basically, a legal option would be to set a field 
 value that would be a Reader or a Reader[].  It would be straightforward to 
 implement this, EXCEPT for the fact that SolrCloud apparently makes 
 UpdateRequest copies, and copying a Reader isn't going to work unless there's 
 a backing solid object somewhere.  Even then, I could have gotten this to 
 work by using a temporary file for large streams, but there's no signal from 
 SolrCloud when it is done with its copies of UpdateRequest, so there's no 
 place to free any backing storage.
 If anyone knows a good way to do non-extracting updates without loading 
 entire documents into memory, please let me know.
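The memory cost described above is simply that a Reader-valued field must be drained into a String before a fully String-based document API can hold it, so the whole document ends up on the heap at once. A minimal sketch of that conversion (illustrative only, not SolrJ code):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.io.UncheckedIOException;

// Draining a Reader into a String keeps the entire document in heap at once,
// which is exactly what breaks when many threads drain large documents together.
public class DrainReader {
    static String drain(Reader reader) {
        try {
            StringBuilder sb = new StringBuilder();
            char[] buf = new char[8192];
            int n;
            while ((n = reader.read(buf)) != -1) {
                sb.append(buf, 0, n);
            }
            return sb.toString();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(drain(new StringReader("hello"))); // prints "hello"
    }
}
```

A streaming API would avoid materializing this String, but as noted, a plain Reader cannot survive the UpdateRequest copies SolrCloud makes without some resettable backing store.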



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6199) SolrJ, using SolrInputDocument methods, requires entire document to be loaded into memory

2014-08-12 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093931#comment-14093931
 ] 

Karl Wright commented on SOLR-6199:
---

I am willing to develop a patch, but before I do, I need advice/encouragement 
from Mark Miller.  Otherwise, what I do is likely to be a waste of time.


 SolrJ, using SolrInputDocument methods, requires entire document to be loaded 
 into memory
 -

 Key: SOLR-6199
 URL: https://issues.apache.org/jira/browse/SOLR-6199
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 4.7.3
Reporter: Karl Wright




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Early Access build for JDK 9 b26 is available on java.net

2014-08-12 Thread Rory O'Donnell Oracle, Dublin Ireland

Hi Uwe, Dawid,

The Early Access build for JDK 9 b26 (https://jdk9.java.net/download/) is
available on java.net. Can you confirm the fix for JDK-8042589, which was in b25?

Summary of changes in JDK 9 Build 26:
http://download.java.net/jdk9/changes/jdk9-b26.html


Early Access Build Test Results 
http://download.java.net/openjdk/testresults/9/testresults.html


Rgds, Rory

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



[jira] [Updated] (SOLR-5178) Admin UI - Memory Graph on Dashboard shows NaN for unused Swap

2014-08-12 Thread Dmitry Kan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dmitry Kan updated SOLR-5178:
-

Attachment: SOLR-5178.patch

A patch for Solr 4.6.0. It adds a check for when both free swap and total swap 
are 0 (dividing one by the other yields NaN).
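The underlying arithmetic is easy to reproduce: with zero total swap the usage ratio is 0.0/0.0, which is NaN in IEEE 754 double arithmetic, so the gauge has to special-case a zero denominator. A minimal sketch of the guard (illustrative only; the actual admin UI fix is in its JavaScript):

```java
public class SwapGauge {
    // Percentage of swap in use; reports 0 when the system has no swap,
    // since 0.0 / 0.0 would otherwise yield NaN.
    static double swapUsedPercent(long freeSwap, long totalSwap) {
        if (totalSwap == 0) {
            return 0.0;
        }
        return 100.0 * (totalSwap - freeSwap) / totalSwap;
    }

    public static void main(String[] args) {
        System.out.println(0.0 / 0.0);              // NaN: the unguarded case
        System.out.println(swapUsedPercent(0, 0));  // 0.0 with the guard
        System.out.println(swapUsedPercent(25, 100));
    }
}
```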

 Admin UI - Memory Graph on Dashboard shows NaN for unused Swap
 --

 Key: SOLR-5178
 URL: https://issues.apache.org/jira/browse/SOLR-5178
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 4.3, 4.4
Reporter: Stefan Matheis (steffkes)
Assignee: Stefan Matheis (steffkes)
Priority: Minor
 Fix For: 4.9, 5.0

 Attachments: SOLR-5178.patch, screenshot-vladimir.jpeg


 If the System doesn't use Swap, the displayed memory graph on the dashboard 
 shows {{NaN}} (not a number) because it tries to divide by zero.
 {code}
 "system": {
   "name": "Linux",
   "version": "3.2.0-39-virtual",
   "arch": "amd64",
   "systemLoadAverage": 3.38,
   "committedVirtualMemorySize": 32454287360,
   "freePhysicalMemorySize": 912945152,
   "freeSwapSpaceSize": 0,
   "processCpuTime": 5627465000,
   "totalPhysicalMemorySize": 71881908224,
   "totalSwapSpaceSize": 0,
   "openFileDescriptorCount": 350,
   "maxFileDescriptorCount": 4096,
   "uname": "Linux ip-xxx-xxx-xxx-xxx 3.2.0-39-virtual #62-Ubuntu SMP Thu Feb 28 00:48:27 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux\n",
   "uptime": "11:24:39 up 4 days, 23:03, 1 user, load average: 3.38, 3.10, 2.95\n"
 }{code}
 We should add an additional check for that.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-12 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14093983#comment-14093983
 ] 

Steve Davids commented on SOLR-6312:


Yes, I understand that all updates will always go to the leader. The CPU-intensive 
task in this entire process is running extraction logic using XPaths 
in the update processor chain, before any requests are distributed to the 
leader/replicas. When the request is distributed to the leader, the leader 
doesn't need to start the update processor chain from scratch; instead it continues 
where the other machine left off in the processing pipeline, at the 
DistributedUpdateProcessor. So if I am able to load-balance requests to all 
replicas, the CPU-intensive tasks (the early update processors) will be shared by 
multiple machines, not just the leader, which should result in increased throughput.

 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6343) add a new end point /json/raw to index json for full text search

2014-08-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6343:
-

Description: I should be able to just put in any random json to an index to 
the end point {{/json/text}} and all the terms can just get indexed to the 
default search field. There should be a way to store the entire JSON as well by 
adding an extra param {{target=fieldName}} and it can store the whole payload 
to that field  (was: I should be able to just put in any random json to an 
index to the end point {{/json/raw}} and all the terms can just get indexed to 
the default search field. There should be a way to store the entire JSON as 
well by adding an extra param {{target=fieldName}} and it can store the whole 
payload to that field)

 add a new end point /json/raw to index json for full text search 
 -

 Key: SOLR-6343
 URL: https://issues.apache.org/jira/browse/SOLR-6343
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul

 I should be able to just put in any random json to an index to the end point 
 {{/json/text}} and all the terms can just get indexed to the default search 
 field. There should be a way to store the entire JSON as well by adding an 
 extra param {{target=fieldName}} and it can store the whole payload to that 
 field



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6343) add a new end point /update/json/text to index json for full text search

2014-08-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6343?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6343:
-

Summary: add a new end point /update/json/text to index json for full text 
search   (was: add a new end point /json/raw to index json for full text search 
)

 add a new end point /update/json/text to index json for full text search 
 -

 Key: SOLR-6343
 URL: https://issues.apache.org/jira/browse/SOLR-6343
 Project: Solr
  Issue Type: New Feature
Reporter: Noble Paul
Assignee: Noble Paul




--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6325) Expose per-collection and per-shard aggregate statistics

2014-08-12 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094029#comment-14094029
 ] 

Shawn Heisey commented on SOLR-6325:


Since this gets the index size, you may want to be aware of SOLR-3990.


 Expose per-collection and per-shard aggregate statistics
 

 Key: SOLR-6325
 URL: https://issues.apache.org/jira/browse/SOLR-6325
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
 Fix For: 4.9, 5.0

 Attachments: SOLR-6325.patch, SOLR-6325.patch, SOLR-6325.patch, 
 SOLR-6325.patch


 SolrCloud doesn't provide any aggregate stats about the cluster or a 
 collection. Very common questions such as document counts per shard, index 
 sizes, request rates etc cannot be answered easily without figuring out the 
 cluster state, invoking multiple core admin APIs and aggregating them 
 manually.
 I propose that we expose an API which returns each of the following on a 
 per-collection and per-shard basis:
 # Document counts
 # Index size on disk
 # Query request rate
 # Indexing request rate
 # Real time get request rate
 I am not yet sure if this should be a distributed search component or a 
 collection API.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.7.0_65) - Build # 10888 - Failure!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10888/
Java: 32bit/jdk1.7.0_65 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  
org.apache.solr.morphlines.cell.SolrCellMorphlineTest.testSolrCellDocumentTypes

Error Message:
key:ignored_creation_date expected:[2011-09-02T10:11:00Z] but 
was:[२०११-०९-०२T१०:११:००Z]

Stack Trace:
java.lang.AssertionError: key:ignored_creation_date 
expected:[2011-09-02T10:11:00Z] but was:[२०११-०९-०२T१०:११:००Z]
at 
__randomizedtesting.SeedInfo.seed([74D870BEBC6790ED:EE1BD320D9BCCE38]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.morphlines.solr.AbstractSolrMorphlineTestBase.testDocumentTypesInternal(AbstractSolrMorphlineTestBase.java:170)
at 
org.apache.solr.morphlines.cell.SolrCellMorphlineTest.testSolrCellDocumentTypes(SolrCellMorphlineTest.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-3274) ZooKeeper related SolrCloud problems

2014-08-12 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094033#comment-14094033
 ] 

Per Steffensen commented on SOLR-3274:
--

bq. Both nodes have 16 CPU cores, 48G of memory and RAID 10 (SSD), I thought it 
would be hard to get performance issues there

Yes that should be hard. Well done! :-)

bq. Anyway, adding a separate node with 4th zookeeper instance might help, 
right?

A ZK cluster should always have an uneven number of nodes. So if you want to 
add additional ZK instances you should add two. I would rather move the two ZK 
instances running on Solr machines to two machines not running Solr, so that 
you end up with 3 ZK instances where none of them run on machines also running 
Solr. We never run ZK on the same machines as Solr - we have bad experiences 
with that - losing ZK connections all the time. You will still occasionally 
lose ZK connections from Solrs when they are under high load, but usually they 
reconnect fairly quickly (before session timeout) and you can continue 
immediately.

I have been working on an optimized ZK where you do not lose ZK connections 
nearly as often, but currently it is not prioritized to finish the job.
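As an editorial aside, the "add two instances, not one" advice follows from ZooKeeper's majority-quorum rule. A minimal sketch of the arithmetic (illustration only, not ZooKeeper or Solr code) shows why a 4th node buys no extra fault tolerance:

```java
// Illustration (not ZooKeeper code): ZK commits writes only with a strict
// majority of the ensemble, so fault tolerance improves only at odd sizes --
// going from 3 to 4 nodes still tolerates only one failure.
public class QuorumMath {
    static int quorum(int ensembleSize) {
        return ensembleSize / 2 + 1;                       // strict majority
    }
    static int tolerableFailures(int ensembleSize) {
        return ensembleSize - quorum(ensembleSize);
    }
    public static void main(String[] args) {
        for (int n = 3; n <= 5; n++) {
            System.out.println(n + " ZK nodes: quorum " + quorum(n)
                    + ", tolerates " + tolerableFailures(n) + " failure(s)");
        }
    }
}
```

Running it shows 3 and 4 nodes both tolerate a single failure; only at 5 does tolerance increase to two.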

 ZooKeeper related SolrCloud problems
 

 Key: SOLR-3274
 URL: https://issues.apache.org/jira/browse/SOLR-3274
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-ALPHA
 Environment: Any
Reporter: Per Steffensen

 Same setup as in SOLR-3273. Well if I have to tell the entire truth we have 7 
 Solr servers, running 28 slices of the same collection (collA) - all slices 
 have one replica (two shards all in all - leader + replica) - 56 cores all in 
 all (8 shards on each solr instance). But anyways...
 Besides the problem reported in SOLR-3273, the system seems to run fine under 
 high load for several hours, but eventually errors like the ones shown below 
 start to occur. I might be wrong, but they all seem to indicate some kind of 
 instability in the collaboration between Solr and ZooKeeper. I have to say 
 that I haven't been there to check ZooKeeper at the moment where those 
 exceptions occur, but basically I don't believe the exceptions occur because 
 ZooKeeper is not running stable - at least when I go and check ZooKeeper 
 through other channels (e.g. my Eclipse ZK plugin) it is always accepting 
 my connection and generally seems to be doing fine.
 Exception 1) Often the first error we see in solr.log is something like this
 {code}
 Mar 22, 2012 5:06:43 AM org.apache.solr.common.SolrException log
 SEVERE: org.apache.solr.common.SolrException: Cannot talk to ZooKeeper - 
 Updates are disabled.
 at org.apache.solr.update.processor.DistributedUpdateProcessor.zkCheck(DistributedUpdateProcessor.java:678)
 at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:250)
 at org.apache.solr.handler.XMLLoader.processUpdate(XMLLoader.java:140)
 at org.apache.solr.handler.XMLLoader.load(XMLLoader.java:80)
 at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:59)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:129)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1540)
 at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:407)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:256)
 at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
 at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
 at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
 at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
 at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
 at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
 at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
 at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
 at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
 at org.mortbay.jetty.Server.handle(Server.java:326)
 at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
 at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:945)
 at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:756)
 at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
 at 

[jira] [Commented] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-08-12 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094076#comment-14094076
 ] 

Michael McCandless commented on LUCENE-5879:


bq. In fact, as far as I see: NumericRangeQuery with that would then just be a 
standard TermRangeQuery with a special binary upper/lower term?

I think so!  We should be able to use NumericUtils.XToSortableInt/Long I think?

But that's phase 2 here... it's hard enough just getting these terms working 
low-level...
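For context, the "sortable" encoding that NumericUtils-style conversions rely on can be sketched as follows (an editorial illustration of the general bit trick, not the actual Lucene API):

```java
// Illustration of the "sortable bits" idea: flipping the sign bit makes the
// unsigned/lexicographic order of the encoded value agree with the signed
// numeric order, so a numeric range can be expressed as a plain term range
// over the encoded terms.
public class SortableBits {
    static long longToSortableLong(long v) {
        return v ^ Long.MIN_VALUE;  // flip the sign bit
    }
    public static void main(String[] args) {
        // -5 < 3 numerically, and the encoded forms compare the same way
        // when compared as unsigned values:
        System.out.println(Long.compareUnsigned(
                longToSortableLong(-5L), longToSortableLong(3L)) < 0);
    }
}
```

With such an encoding, a NumericRangeQuery over longs reduces to a TermRangeQuery with encoded lower and upper bounds, which is the point being made above.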

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch


 This cool idea to generalize numeric/trie fields came from Adrien:
 Today, when we index a numeric field (LongField, etc.) we pre-compute
 (via NumericTokenStream) outside of indexer/codec which prefix terms
 should be indexed.
 But this can be inefficient: you set a static precisionStep, and
 always add those prefix terms regardless of how the terms in the field
 are actually distributed.  Yet typically in real world applications
 the terms have a non-random distribution.
 So, it should be better if instead the terms dict decides where it
 makes sense to insert prefix terms, based on how dense the terms are
 in each region of term space.
 This way we can speed up query time for both term (e.g. infix
 suggester) and numeric ranges, and it should let us use less index
 space and get faster range queries.
  
 This would also mean that min/maxTerm for a numeric field would now be
 correct, vs today where the externally computed prefix terms are
 placed after the full precision terms, causing hairy code like
 NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
 feasible.
 The terms dict can also do tricks not possible if you must live on top
 of its APIs, e.g. to handle the adversary/over-constrained case when a
 given prefix has too many terms following it but finer prefixes
 have too few (what block tree calls floor term blocks).



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6291) RollingRestartTest is too slow.

2014-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094080#comment-14094080
 ] 

ASF subversion and git services commented on SOLR-6291:
---

Commit 1617482 from [~markrmil...@gmail.com] in branch 'dev/trunk'
[ https://svn.apache.org/r1617482 ]

SOLR-6291: RollingRestartTest is too slow.

 RollingRestartTest is too slow.
 ---

 Key: SOLR-6291
 URL: https://issues.apache.org/jira/browse/SOLR-6291
 Project: Solr
  Issue Type: Sub-task
Reporter: Mark Miller
 Attachments: SOLR-6291.patch


 I assume it's simply because shards is set to 16.
 Tests should use much lower shard counts and then boost them up for nightly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6291) RollingRestartTest is too slow.

2014-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094082#comment-14094082
 ] 

ASF subversion and git services commented on SOLR-6291:
---

Commit 1617483 from [~markrmil...@gmail.com] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1617483 ]

SOLR-6291: RollingRestartTest is too slow.

 RollingRestartTest is too slow.
 ---

 Key: SOLR-6291
 URL: https://issues.apache.org/jira/browse/SOLR-6291
 Project: Solr
  Issue Type: Sub-task
Reporter: Mark Miller
 Attachments: SOLR-6291.patch


 I assume it's simply because shards is set to 16.
 Tests should use much lower shard counts and then boost them up for nightly.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2014-08-12 Thread Zaytsev Sergey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094096#comment-14094096
 ] 

Zaytsev Sergey commented on SOLR-3920:
--

Is there a way to do the same with a REST method call when running SolrCloud? In 
other words, to pass a collection name into the URL for an update, like this:
http://localhost:8983/solr/MyCollectionName/update?bla-bla-bla

Thank you very much!

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3920.patch


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
   CloudSolrServer server = new CloudSolrServer("localhost:9983");
   UpdateRequest commit1 = new UpdateRequest();
   commit1.setAction(ACTION.COMMIT, true, true);
   commit1.setParam("collection", "collection1");
   //this commit is bug's cause
   commit1.process(server);

   SolrInputDocument doc = new SolrInputDocument();
   doc.addField("id", "id");
   doc.addField("name", "name");

   UpdateRequest update2 = new UpdateRequest();
   update2.setParam("collection", "collection2");
   update2.add(doc);
   update2.process(server);

   UpdateRequest commit2 = new UpdateRequest();
   commit2.setAction(ACTION.COMMIT, true, true);
   commit2.setParam("collection", "collection2");
   commit2.process(server);
   SolrQuery q1 = new SolrQuery("id:id");
   q1.set("collection", "collection1");
   SolrQuery q2 = new SolrQuery("id:id");
   q2.set("collection", "collection2");

   //when
   QueryResponse resp1 = server.query(q1);
   QueryResponse resp2 = server.query(q2);

   //then
   Assert.assertEquals(0L, resp1.getResults().getNumFound());
   Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
   CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
   server1.setDefaultCollection("collection1");
   server1.deleteByQuery("id:id");
   server1.commit(true, true);

   CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
   server2.setDefaultCollection("collection2");
   server2.deleteByQuery("id:id");
   server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5864) Split BytesRef into BytesRef and BytesRefBuilder

2014-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094115#comment-14094115
 ] 

ASF subversion and git services commented on LUCENE-5864:
-

Commit 1617493 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1617493 ]

LUCENE-5864: Split BytesRef into BytesRef and BytesRefBuilder.

 Split BytesRef into BytesRef and BytesRefBuilder
 

 Key: LUCENE-5864
 URL: https://issues.apache.org/jira/browse/LUCENE-5864
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 4.10

 Attachments: LUCENE-5864.patch, LUCENE-5864.patch, LUCENE-5864.patch


 Follow-up of LUCENE-5836.
 The fact that BytesRef (and CharsRef, IntsRef, LongsRef) can be used as 
 either pointers to a section of a byte[] or as buffers raises issues. The 
 idea would be to keep BytesRef but remove all the buffer methods like 
 copyBytes, grow, etc. and add a new class BytesRefBuilder that wraps a byte[] 
 and a length (but no offset), has grow/copyBytes/copyChars methods and the 
 ability to build BytesRef instances.
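The proposed split can be pictured with a toy builder (hypothetical class written for this note, not the actual patch): the builder owns a growable byte[] with a length but no offset, and "builds" immutable snapshots, while the pointer-style view (BytesRef proper) would just reference bytes/offset/length.

```java
import java.util.Arrays;

// Toy sketch of the buffer/pointer split described above (hypothetical
// class, not the LUCENE-5864 patch).
public class MiniBytesRefBuilder {
    private byte[] bytes = new byte[8];
    private int length = 0;

    // grow the backing array to at least minCapacity
    public void grow(int minCapacity) {
        if (minCapacity > bytes.length) {
            bytes = Arrays.copyOf(bytes, Math.max(minCapacity, bytes.length * 2));
        }
    }
    public void append(byte b) {
        grow(length + 1);
        bytes[length++] = b;
    }
    // copyChars, restricted to ASCII for the sketch
    public void copyChars(String s) {
        length = 0;
        for (int i = 0; i < s.length(); i++) append((byte) s.charAt(i));
    }
    /** The "build" step: an immutable snapshot standing in for a BytesRef. */
    public byte[] toBytes() {
        return Arrays.copyOf(bytes, length);
    }
    public int length() { return length; }
}
```

The key property is that only the builder mutates; consumers of the built value never see grow/copy methods, which is the API confusion the issue is removing.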



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-4464) DIH - Processed documents counter resets to zero after first database request

2014-08-12 Thread Thomas Champagne (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-4464?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094120#comment-14094120
 ] 

Thomas Champagne commented on SOLR-4464:


In Solr 4.9, this problem is due to line 230 in the DocBuilder class: 
{code:java|title=DocBuilder.java at line 230|borderStyle=solid}
statusMessages.remove(DataImporter.MSG.TOTAL_DOC_PROCESSED);
{code}
After each entity is processed, the status message about documents 
processed is removed. I don't understand why.

 DIH - Processed documents counter resets to zero after first database request
 -

 Key: SOLR-4464
 URL: https://issues.apache.org/jira/browse/SOLR-4464
 Project: Solr
  Issue Type: Bug
  Components: contrib - DataImportHandler
Affects Versions: 4.1
 Environment: CentOS 6.3 x64 / apache-tomcat-7.0.35 / 
 mysql-connector-java-5.1.23 - Large machine 5TB of drives and 280GB RAM - 
 Java Heap set to 250Gb - resources are not an issue.
Reporter: Dave Cook
Assignee: Shalin Shekhar Mangar
Priority: Minor
  Labels: patch
 Attachments: 20130921solrzerocounter.png, 20130921solrzerocounter2.png


 [11:20] quasimotoca Solr 4.1 - Processed documents resets to 0 after 
 processing my first entity - all database schemas are identical
 [11:21] quasimotoca However, all the documents get fetched and I can query 
 the results no problem.  
 Here's a link to a screenshot - http://findocs/gridworkz.com/solr 
 Everything works perfectly except the screen doesn't increment the Processed 
 counter on subsequent database requests.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094170#comment-14094170
 ] 

Erick Erickson commented on SOLR-6312:
--

Hmmm, interesting problem here. This is why, for scaling purposes, I vastly
prefer doing any such heavy lifting on the clients so I can scale up by racking
N clients together rather than have a Solr node be a bottleneck due to the
parsing. Is that a possibility?

So I suspect we can close this JIRA? You're correct that updatesToLeaders
is not respected, but it's also not going to be. Or perhaps change the title
to "deprecate CloudSolrServer updatesToLeaders constructor argument".



 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094173#comment-14094173
 ] 

Hoss Man commented on SOLR-6365:


bq. {{<params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" 
appends="i=j"/>}}

that's not even valid XML (bare {{&}})

and what does it even mean to say that you want to set some defaults and 
invariants on {{/some-other-path*}} if you don't configure any type of 
information about what handler {{/some-other-path*}} uses?

how would this kind of syntax help with "...we can avoid specifying the 
components altogether and make solrconfig much simpler"?



 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt"/>
 {code}
 The idea is to use the parameters in the same format as we pass in the http 
 request and eliminate specifying our default components in solrconfig.xml
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2014-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094177#comment-14094177
 ] 

Erick Erickson commented on SOLR-3920:
--

Please raise questions like this on the Solr user's list; the JIRAs are
for discussing code changes rather than answering user-level
questions.

Especially, please refrain from adding to closed JIRAs.

Of course you can do what you ask. You've used the correct 
syntax already. Did you even try it before posting the question?

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3920.patch


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
   CloudSolrServer server = new CloudSolrServer("localhost:9983");
   UpdateRequest commit1 = new UpdateRequest();
   commit1.setAction(ACTION.COMMIT, true, true);
   commit1.setParam("collection", "collection1");
   //this commit is bug's cause
   commit1.process(server);

   SolrInputDocument doc = new SolrInputDocument();
   doc.addField("id", "id");
   doc.addField("name", "name");

   UpdateRequest update2 = new UpdateRequest();
   update2.setParam("collection", "collection2");
   update2.add(doc);
   update2.process(server);

   UpdateRequest commit2 = new UpdateRequest();
   commit2.setAction(ACTION.COMMIT, true, true);
   commit2.setParam("collection", "collection2");
   commit2.process(server);
   SolrQuery q1 = new SolrQuery("id:id");
   q1.set("collection", "collection1");
   SolrQuery q2 = new SolrQuery("id:id");
   q2.set("collection", "collection2");

   //when
   QueryResponse resp1 = server.query(q1);
   QueryResponse resp2 = server.query(q2);

   //then
   Assert.assertEquals(0L, resp1.getResults().getNumFound());
   Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
   CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
   server1.setDefaultCollection("collection1");
   server1.deleteByQuery("id:id");
   server1.commit(true, true);

   CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
   server2.setDefaultCollection("collection2");
   server2.deleteByQuery("id:id");
   server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-12 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094182#comment-14094182
 ] 

Hoss Man commented on SOLR-6312:


bq. So I suspect we can close this JIRA? You're correct that updatesToLeaders 
is not respected, but it's also not going to be.

Steve's use case (and similar use cases, i.e. using the 
ExtractingRequestHandler on large binary data files) actually strikes me as a 
really good reason to make updatesToLeaders==false meaningful again: randomize 
updates to all up replicas in the collection regardless of leader status.  
(The default is updatesToLeaders==true; no reason that would change, and no 
reason it would impact anyone except people like Steve trying to distribute 
the load of early logic to non-leaders.)
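The routing behavior described here could be sketched roughly as follows (an editorial illustration with hypothetical types, not actual SolrJ code):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical types (not SolrJ) illustrating the idea above: with
// updatesToLeaders == false, pick a random *up* replica regardless of leader
// status; with true, route only to up leaders.
public class ReplicaPicker {
    static class Replica {
        final String url; final boolean up; final boolean leader;
        Replica(String url, boolean up, boolean leader) {
            this.url = url; this.up = up; this.leader = leader;
        }
    }

    static Replica pick(List<Replica> replicas, boolean updatesToLeaders, Random rnd) {
        List<Replica> candidates = new ArrayList<>();
        for (Replica r : replicas) {
            // only consider live nodes; optionally restrict to leaders
            if (r.up && (!updatesToLeaders || r.leader)) {
                candidates.add(r);
            }
        }
        return candidates.isEmpty() ? null : candidates.get(rnd.nextInt(candidates.size()));
    }
}
```

The point of the sketch is only the candidate-filtering step: the flag widens the candidate set from "up leaders" to "all up replicas".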

 CloudSolrServer doesn't honor updatesToLeaders constructor argument
 ---

 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


 The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
 requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094181#comment-14094181
 ] 

Erick Erickson commented on SOLR-6365:
--

I'm a bit puzzled too at what the point is here. From a sysadmin's standpoint,
this would move all the configuration (which is vitally important to me) to
some scattered code that lives on, like, people's personal laptops, a nightmare
to administer.

So I guess you're thinking of some higher-level problem that this is part of,
what is that problem? A REST API for solrconfig?

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt"/>
 {code}
 The idea is to use the parameters in the same format as we pass in the http 
 request and eliminate specifying our default components in solrconfig.xml
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094225#comment-14094225
 ] 

Noble Paul commented on SOLR-6365:
--

bq. that's not even valid XML (bare {{&}})

Yeah, you are right, according to XML standards it is not. But all parsers 
accept it. That is beside the point, though.

bq. and what does it even mean to say that you want to set some defaults and 
invariants on {{/some-other-path*}} if you don't configure any type of 
information about what handler {{/some-other-path*}} uses?

Yes. Looking from a user's point of view: they don't really think about the 
Solr components. They assume that a given path, say {{/update}}, has certain 
capabilities and accepts certain parameters. For them it is not a component, 
it is just an API end point. Yes, you can of course specify wrong parameters, 
which you are free to do even now. I'm not saying we will take away all 
configuration from solrconfig.xml; this is mainly for the fixed paths. 

The other use case this addresses is our REST APIs. They are managed completely 
outside of solrconfig.xml and there is no way to specify params. 

bq. how would this kind of syntax help with "...we can avoid specifying the 
components altogether and make solrconfig much simpler"?

I'm thinking of fixing certain paths and avoiding certain common definitions in 
the xml file. We should say that certain paths and their parameters are fixed, 
e.g. {{/select}}, {{/update}}, {{/admin/*}} etc., and all I should be able to 
do is set params. 

In the current design it is impossible to have a global-level configuration 
that spans multiple components, say {{wt=json}} for all paths. 

bq. So I guess you're thinking of some higher-level problem that this is part 
of, what is that problem? A REST API for solrconfig?

Yes, you are right, this issue is not addressing that use case. But it becomes 
much simpler to provide an API to modify params than the entire components. 
Most often the use case is about changing the params.
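Resolving which registered params apply to a request path (an exact path beats a glob like {{/update/*}}, which beats the catch-all {{/**}}) could look roughly like this editor's sketch in plain Java (hypothetical class, not proposed Solr code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Editor's sketch (not proposed Solr code) of per-path param overlays:
// the most specific registered pattern wins.
public class ParamOverlays {
    private final Map<String, String> defaults = new LinkedHashMap<>();

    public void register(String pattern, String params) {
        defaults.put(pattern, params);
    }

    /** Return the params of the most specific matching pattern, or null. */
    public String lookup(String path) {
        String best = null;
        int bestScore = -1;
        for (Map.Entry<String, String> e : defaults.entrySet()) {
            String p = e.getKey();
            boolean matches = p.equals(path)
                    || p.equals("/**")
                    || (p.endsWith("*") && path.startsWith(p.substring(0, p.length() - 1)));
            // specificity: exact match outranks globs, longer globs outrank
            // shorter ones, and the catch-all "/**" ranks lowest
            int score = !matches ? -1
                    : p.equals(path) ? Integer.MAX_VALUE
                    : p.equals("/**") ? 0
                    : p.length();
            if (matches && score > bestScore) {
                bestScore = score;
                best = e.getValue();
            }
        }
        return best;
    }
}
```

With the patterns from the issue description registered, {{/update/json}} would resolve to the {{/update/*}} params while any unlisted path falls through to {{/**}}.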


 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components and control these 
 params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt"/>
 {code}
 The idea is to use the parameters in the same format as we pass in the http 
 request and eliminate specifying our default components in solrconfig.xml
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094225#comment-14094225
 ] 

Noble Paul edited comment on SOLR-6365 at 8/12/14 4:06 PM:
---

bq. that's not even valid XML (bare {{&}})
yeah, you are right, according to XML standards it is not. But all parsers 
accept that. But that is beside the point

bq.and what does it even mean to say that you want to set some defaults and 
invariants on {{/some-other-path/*}}  if you don't configure any type of 
information about what handler {{/some-other-path/}} uses?

Yes, Looking from a user's point of view. They don't really think about the 
solr components. They assume that a given path , say {{/update}}, has certain 
capabilities and accepts certain parameters . For them it is not a component , 
it is just an API end point.  Yes, you can of course specify wrong parameters 
which you are free to do even now.  I'm not saying we will take away all 
configuration from solrconfig.xml . It is mainly for the fixed paths. 

The other use case this addresses is our REST APIs.  It is managed completely 
outside of solrconfig.xml and there is no way to specify params . 

bq. how would this kind of syntax help with "...we can avoid specifying the 
components altogether and make solrconfig much simpler"?

I'm thinking of fixing certain paths and avoiding certain common definitions in 
the xml file. We should make it fixed saying that certain paths and their 
parameters are fixed say {{/select}} , {{/update}}, {{/admin/*}} etc. All I 
should be able to do is set params 

In the current design it is impossible to have global level configurations 
which spans multiple components , say {{wt=json}} or {{df=text}} for all paths. 

 bq. So I guess you're thinking of some higher-level problem that this is part 
of, what is that problem? A REST API for solrconfig?

Yes, you are right, this issue is not addressing that use case. But it becomes 
much simpler to provide an API to modify params than to modify entire 
components. Most often the use case is about changing the params.





 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 

[jira] [Updated] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6365:
-

Description: 
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components, and to control 
these params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
<!-- use json for all paths and _txt as the default search field -->
<params path="/**" defaults="wt=json&df=_txt" />
{code}
The idea is to use the parameters in the same format as we pass in the HTTP 
request, and to eliminate specifying our default components in solrconfig.xml.

 

  was:
The components are configured in solrconfig.xml mostly for specifying these 
extra parameters. If we separate these out, we can avoid specifying the 
components altogether and make solrconfig much simpler. Eventually we want 
users to see all functions as paths instead of components, and to control 
these params from outside, through an API, persisted in ZK.

example
{code:xml}
<!-- these are top level tags not specified inside any components -->
<params path="/dataimport" defaults="config=data-config.xml"/>
<params path="/update/*" defaults="wt=json"/>
<params path="/some-other-path*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
<!-- use json for all paths and _txt as the default search field -->
<params path="/**" defaults="wt=json&df=_txt" />
{code}
The idea is to use the parameters in the same format as we pass in the HTTP 
request, and to eliminate specifying our default components in solrconfig.xml.

 


 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to use the parameters in the same format as we pass in the HTTP 
 request, and to eliminate specifying our default components in solrconfig.xml.
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094225#comment-14094225
 ] 

Noble Paul edited comment on SOLR-6365 at 8/12/14 4:07 PM:
---

bq. that's not even valid XML (bare )
Yeah, you are right; according to XML standards it is not, but all parsers 
accept it. That is beside the point, though.

bq.and what does it even mean to say that you want to set some defaults and 
invariants on {{/some-other-path/*}}  if you don't configure any type of 
information about what handler {{/some-other-path/}} uses?

Yes. Looking from a user's point of view: they don't really think about the 
Solr components. They assume that a given path, say {{/update}}, has certain 
capabilities and accepts certain parameters. For them it is not a component, 
it is just an API endpoint. Yes, you can of course specify wrong parameters, 
which you are free to do even now. I'm not saying we will take away all 
configuration from solrconfig.xml; this is mainly for the fixed paths. 

The other use case this addresses is our REST APIs. They are managed completely 
outside of solrconfig.xml, and there is currently no way to specify params for them. 

bq. how would this kind of syntax help with "...we can avoid specifying the 
components altogether and make solrconfig much simpler"?

I'm thinking of fixing certain paths and avoiding certain common definitions in 
the xml file. We should declare that certain paths and their parameters are 
fixed, say {{/select}}, {{/update}}, {{/admin/*}}, {{/get}} etc. All I should 
be able to do is set params. 

In the current design it is impossible to have global-level configuration 
that spans multiple components, say {{wt=json}} or {{df=text}} for all paths. 

 bq. So I guess you're thinking of some higher-level problem that this is part 
of, what is that problem? A REST API for solrconfig?

Yes, you are right, this issue is not addressing that use case. But it becomes 
much simpler to provide an API to modify params than to modify entire 
components. Most often the use case is about changing the params.





 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" 

[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0_20-ea-b23) - Build # 10889 - Still Failing!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10889/
Java: 64bit/jdk1.8.0_20-ea-b23 -XX:-UseCompressedOops -XX:+UseParallelGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([C7C54845AA7C4664]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:617)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:180)
at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
Clean up static fields (in @AfterClass?), your test seems to hang on to 
approximately 12,203,544 bytes (threshold is 10,485,760). Field reference sizes 
(counted individually):   - 13,468,640 bytes, protected static 
org.apache.solr.core.SolrConfig org.apache.solr.SolrTestCaseJ4.solrConfig   - 
12,915,064 bytes, protected static 
org.apache.solr.util.TestHarness$LocalRequestFactory 
org.apache.solr.SolrTestCaseJ4.lrf   - 12,914,688 bytes, protected static 
org.apache.solr.util.TestHarness org.apache.solr.SolrTestCaseJ4.h   - 448 
bytes, private static java.util.regex.Pattern 
org.apache.solr.SolrTestCaseJ4.nonEscapedSingleQuotePattern   - 312 bytes, 
private static java.util.regex.Pattern 
org.apache.solr.SolrTestCaseJ4.escapedSingleQuotePattern   - 296 bytes, public 
static org.junit.rules.TestRule org.apache.solr.SolrTestCaseJ4.solrClassRules   
- 264 bytes, public static java.io.File 
org.apache.solr.cloud.AbstractZkTestCase.SOLRHOME   - 216 bytes, protected 
static java.lang.String org.apache.solr.SolrTestCaseJ4.testSolrHome   - 144 
bytes, private static java.lang.String 
org.apache.solr.SolrTestCaseJ4.factoryProp   - 88 bytes, protected static 
java.lang.String org.apache.solr.SolrTestCaseJ4.configString   - 80 bytes, 
private static java.lang.String org.apache.solr.SolrTestCaseJ4.coreName   - 80 
bytes, protected static java.lang.String 
org.apache.solr.SolrTestCaseJ4.schemaString

Stack Trace:
junit.framework.AssertionFailedError: Clean up static fields (in @AfterClass?), 
your test seems to hang on to approximately 12,203,544 bytes (threshold is 
10,485,760). Field reference sizes (counted individually):
  - 13,468,640 bytes, protected static org.apache.solr.core.SolrConfig 
org.apache.solr.SolrTestCaseJ4.solrConfig
  - 12,915,064 bytes, protected static 
org.apache.solr.util.TestHarness$LocalRequestFactory 
org.apache.solr.SolrTestCaseJ4.lrf
  - 12,914,688 bytes, protected static 

[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094273#comment-14094273
 ] 

Shalin Shekhar Mangar commented on SOLR-6365:
-

I like this idea. We can also provide a way to name a certain 
defaults/appends/invariants combination such that people can just provide a 
name while querying. This will become more powerful when we build REST APIs for 
creating/modifying such named param-sets.
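A rough sketch of how such named param-sets might behave at request time. Everything here is hypothetical: the {{useParams}} request parameter, the registry contents, and the class are invented for illustration; in the proposal the registry would live in ZK and be editable through a REST API.

```java
import java.util.*;

public class ParamSets {
    // server-side registry of named param bundles (hypothetically stored in ZK)
    static final Map<String, Map<String, String>> REGISTRY = Map.of(
        "json-out", Map.of("wt", "json", "indent", "true"),
        "fast",     Map.of("rows", "10", "timeAllowed", "500"));

    /** Expand e.g. useParams=json-out into concrete defaults for one request. */
    static Map<String, String> resolve(Map<String, String> request) {
        Map<String, String> out = new LinkedHashMap<>();
        String name = request.get("useParams");
        if (name != null && REGISTRY.containsKey(name))
            out.putAll(REGISTRY.get(name)); // the named set supplies defaults
        // explicit request params win over the named set
        request.forEach((k, v) -> { if (!k.equals("useParams")) out.put(k, v); });
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> req = Map.of("useParams", "json-out", "q", "*:*");
        System.out.println(resolve(req)); // includes wt=json, indent=true and q=*:*
    }
}
```

The interesting design choice is the precedence: treating the named set as defaults (overridable per request) keeps it composable with the path-scoped defaults/appends/invariants discussed above.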

bq. From a sysadmin's standpoint, this would move all the configuration (which 
is vitally important to me) to some scattered code that lives on, like, 
people's personal laptops, a nightmare to administer.

I didn't get that impression from reading the description. What makes you say 
that?

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to use the parameters in the same format as we pass in the HTTP 
 request, and to eliminate specifying our default components in solrconfig.xml.
  






[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094276#comment-14094276
 ] 

Noble Paul commented on SOLR-6365:
--

bq. We can also provide a way to name a certain defaults/appends/invariants 
combination 

I like that idea: naming a bunch of params and using the name as a reference 
in queries.

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 example
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to use the parameters in the same format as we pass in the HTTP 
 request, and to eliminate specifying our default components in solrconfig.xml.
  






[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-08-12 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094318#comment-14094318
 ] 

Noble Paul commented on SOLR-5473:
--

bq. I don't mind that as an expert, unsupported override or something, but by 
and large I think this should be a system-wide config, similar to legacyMode

+1

 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: SolrCloud
 Fix For: 5.0, 4.10

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473_no_ui.patch, SOLR-5473_undo.patch, 
 ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the state of each collection under 
 the /collections/collectionname/state.json node and watch state changes 
 selectively.
 https://reviews.apache.org/r/24220/






[jira] [Commented] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2014-08-12 Thread Zaytsev Sergey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094376#comment-14094376
 ] 

Zaytsev Sergey commented on SOLR-3920:
--

Sorry, I will, definitely.

And on the subject: of course I did, but with no success... That is exactly 
the reason I asked the question...

Could you please provide the correct syntax (URL) to update a specific 
collection? I'll close this issue right away then.

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3920.patch


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
     CloudSolrServer server = new CloudSolrServer("localhost:9983");
     UpdateRequest commit1 = new UpdateRequest();
     commit1.setAction(ACTION.COMMIT, true, true);
     commit1.setParam("collection", "collection1");
     //this commit is bug's cause
     commit1.process(server);

     SolrInputDocument doc = new SolrInputDocument();
     doc.addField("id", "id");
     doc.addField("name", "name");

     UpdateRequest update2 = new UpdateRequest();
     update2.setParam("collection", "collection2");
     update2.add(doc);
     update2.process(server);

     UpdateRequest commit2 = new UpdateRequest();
     commit2.setAction(ACTION.COMMIT, true, true);
     commit2.setParam("collection", "collection2");
     commit2.process(server);
     SolrQuery q1 = new SolrQuery("id:id");
     q1.set("collection", "collection1");
     SolrQuery q2 = new SolrQuery("id:id");
     q2.set("collection", "collection2");

     //when
     QueryResponse resp1 = server.query(q1);
     QueryResponse resp2 = server.query(q2);

     //then
     Assert.assertEquals(0L, resp1.getResults().getNumFound());
     Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
     CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
     server1.setDefaultCollection("collection1");
     server1.deleteByQuery("id:id");
     server1.commit(true, true);

     CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
     server2.setDefaultCollection("collection2");
     server2.deleteByQuery("id:id");
     server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.






Re: SolrCloud on HDFS empty tlog hence doesn't replay after Solr process crash and restart

2014-08-12 Thread Chris Hostetter

Tom: I don't know enough about the HDFS code to fully understand what's 
going on here, but based on your description of the problem it definitely 
smells like a bug, so I've opened an issue to make sure we don't lose 
track of it...

https://issues.apache.org/jira/browse/SOLR-6367


: Date: Fri, 1 Aug 2014 10:45:36 -0400
: From: Tom Chen tomchen1...@gmail.com
: Reply-To: dev@lucene.apache.org
: To: dev@lucene.apache.org
: Subject: Re: SolrCloud on HDFS empty tlog hence doesn't replay after Solr
: process crash and restart
: 
: I wonder if there's any update on this. Should we create a JIRA to track
: this?
: 
: Thanks,
: Tom
: 
: 
: On Mon, Jul 21, 2014 at 12:18 PM, Mark Miller markrmil...@gmail.com wrote:
: 
:  It’s on my list to investigate.
: 
:  --
:  Mark Miller
:  about.me/markrmiller
: 
:  On July 21, 2014 at 10:26:09 AM, Tom Chen (tomchen1...@gmail.com) wrote:
:   Any thought about this issue: Solr on HDFS generate empty tlog when add
:   documents without commit.
:  
:   Thanks,
:   Tom
:  
:  
:   On Fri, Jul 18, 2014 at 12:21 PM, Tom Chen wrote:
:  
:Hi,
:   
:This seems a bug for Solr running on HDFS.
:   
:Reproduce steps:
:1) Setup Solr to run on HDFS like this:
:   
:java -Dsolr.directoryFactory=HdfsDirectoryFactory
:-Dsolr.lock.type=hdfs
:-Dsolr.hdfs.home=hdfs://host:port/path
:   
:For the purpose of this testing, turn off the default auto commit in
:solrconfig.xml, i.e. comment out autoCommit like this:
:   
:   
:2) Add a document without commit:
:curl "http://localhost:8983/solr/collection1/update?commit=false" -H
:"Content-type:text/xml; charset=utf-8" --data-binary @solr.xml
:   
:3) Solr generate empty tlog file (0 file size, the last one ends with
:  6):
:[hadoop@hdtest042 exampledocs]$ hadoop fs -ls
:/path/collection1/core_node1/data/tlog
:Found 5 items
:-rw-r--r-- 1 hadoop hadoop 667 2014-07-18 08:47
:/path/collection1/core_node1/data/tlog/tlog.001
:-rw-r--r-- 1 hadoop hadoop 67 2014-07-18 08:47
:/path/collection1/core_node1/data/tlog/tlog.003
:-rw-r--r-- 1 hadoop hadoop 667 2014-07-18 08:47
:/path/collection1/core_node1/data/tlog/tlog.004
:-rw-r--r-- 1 hadoop hadoop 0 2014-07-18 09:02
:/path/collection1/core_node1/data/tlog/tlog.005
:-rw-r--r-- 1 hadoop hadoop 0 2014-07-18 09:02
:/path/collection1/core_node1/data/tlog/tlog.006
:   
:4) Simulate Solr crash by killing the process with -9 option.
:   
:5) restart the Solr process. Observation is that uncommitted document
:  are
:not replayed, files in tlog directory are cleaned up. Hence uncommitted
:document(s) is lost.
:   
:Am I missing anything or this is a bug?
:   
:BTW, additional observations:
:a) If in step 4) Solr is stopped gracefully (i.e. without -9 option),
:non-empty tlog file is geneated and after re-starting Solr, uncommitted
:document is replayed as expected.
:   
:b) If Solr doesn't run on HDFS (i.e. on local file system), this issue
:  is
:not observed either.
:   
:Thanks,
:Tom
:   
:  
: 
: 
:  -
:  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
:  For additional commands, e-mail: dev-h...@lucene.apache.org
: 
: 
: 

-Hoss
http://www.lucidworks.com/


[jira] [Created] (SOLR-6367) empty tlog on HDFS when hard crash - no docs to replay on recovery

2014-08-12 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6367:
--

 Summary: empty tlog on HDFS when hard crash - no docs to replay on 
recovery
 Key: SOLR-6367
 URL: https://issues.apache.org/jira/browse/SOLR-6367
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


Filing this bug based on an email to solr-user@lucene from Tom Chen (Fri, 18 
Jul 2014)...

{panel}
Reproduce steps:
1) Setup Solr to run on HDFS like this:

{noformat}
java -Dsolr.directoryFactory=HdfsDirectoryFactory
 -Dsolr.lock.type=hdfs
 -Dsolr.hdfs.home=hdfs://host:port/path
{noformat}

For the purpose of this testing, turn off the default auto commit in 
solrconfig.xml, i.e. comment out autoCommit like this:
{code}
<!--
<autoCommit>
   <maxTime>${solr.autoCommit.maxTime:15000}</maxTime>
   <openSearcher>false</openSearcher>
</autoCommit>
-->
{code}

2) Add a document without commit:
{{curl "http://localhost:8983/solr/collection1/update?commit=false" -H
"Content-type:text/xml; charset=utf-8" --data-binary @solr.xml}}

3) Solr generates an empty tlog file (0 file size; the last one ends with 6):
{noformat}
[hadoop@hdtest042 exampledocs]$ hadoop fs -ls
/path/collection1/core_node1/data/tlog
Found 5 items
-rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
/path/collection1/core_node1/data/tlog/tlog.001
-rw-r--r--   1 hadoop hadoop 67 2014-07-18 08:47
/path/collection1/core_node1/data/tlog/tlog.003
-rw-r--r--   1 hadoop hadoop667 2014-07-18 08:47
/path/collection1/core_node1/data/tlog/tlog.004
-rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
/path/collection1/core_node1/data/tlog/tlog.005
-rw-r--r--   1 hadoop hadoop  0 2014-07-18 09:02
/path/collection1/core_node1/data/tlog/tlog.006
{noformat}

4) Simulate a Solr crash by killing the process with the -9 option.

5) Restart the Solr process. Observation is that uncommitted documents are
not replayed, and files in the tlog directory are cleaned up. Hence the
uncommitted document(s) are lost.

Am I missing anything, or is this a bug?

BTW, additional observations:
a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option), a
non-empty tlog file is generated and, after re-starting Solr, the uncommitted
document is replayed as expected.

b) If Solr doesn't run on HDFS (i.e. it runs on the local file system), this
issue is not observed either.
{panel}
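For context on why a hard kill can leave a zero-length file: bytes sitting in a client-side write buffer are lost unless they have been flushed to the underlying store (on HDFS, via hflush/hsync). The following is a minimal, HDFS-free illustration of that general mechanism using plain java.io; it is an analogy for the observed behavior, not the actual Solr/HDFS transaction-log code.

```java
import java.io.*;
import java.nio.file.*;

public class BufferLossDemo {
    public static void main(String[] args) throws IOException {
        Path tlog = Files.createTempFile("tlog", ".bin");

        // Write through a buffer but "crash" before any flush:
        // the bytes never reach the file, analogous to an unflushed tlog.
        OutputStream raw = Files.newOutputStream(tlog);
        BufferedOutputStream buf = new BufferedOutputStream(raw, 8192);
        buf.write("uncommitted doc".getBytes());
        System.out.println("size without flush: " + Files.size(tlog)); // 0

        // Same write, but flushed before the "crash": the data survives.
        buf.flush();
        System.out.println("size after flush: " + Files.size(tlog));   // 15

        raw.close();
        Files.delete(tlog);
    }
}
```

If the HDFS tlog path skips an hflush/hsync-style call somewhere that the local-filesystem path performs a flush, that would match both observations (data survives a graceful stop, which closes and flushes the stream, but not a kill -9).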






[jira] [Commented] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2014-08-12 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094381#comment-14094381
 ] 

Erick Erickson commented on SOLR-3920:
--

Try 

http://localhost:8983/solr/collection1/update?stream.body=<commit/>

That should at least successfully get to your server; tail the log and you 
should see it come through.

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3920.patch


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
   CloudSolrServer server = new CloudSolrServer("localhost:9983");
   UpdateRequest commit1 = new UpdateRequest();
   commit1.setAction(ACTION.COMMIT, true, true);
   commit1.setParam("collection", "collection1");
   //this commit is bug's cause
   commit1.process(server);

   SolrInputDocument doc = new SolrInputDocument();
   doc.addField("id", "id");
   doc.addField("name", "name");

   UpdateRequest update2 = new UpdateRequest();
   update2.setParam("collection", "collection2");
   update2.add(doc);
   update2.process(server);

   UpdateRequest commit2 = new UpdateRequest();
   commit2.setAction(ACTION.COMMIT, true, true);
   commit2.setParam("collection", "collection2");
   commit2.process(server);
   SolrQuery q1 = new SolrQuery("id:id");
   q1.set("collection", "collection1");
   SolrQuery q2 = new SolrQuery("id:id");
   q2.set("collection", "collection2");

   //when
   QueryResponse resp1 = server.query(q1);
   QueryResponse resp2 = server.query(q2);

   //then
   Assert.assertEquals(0L, resp1.getResults().getNumFound());
   Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
   CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
   server1.setDefaultCollection("collection1");
   server1.deleteByQuery("id:id");
   server1.commit(true, true);

   CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
   server2.setDefaultCollection("collection2");
   server2.deleteByQuery("id:id");
   server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.
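
The underlying fix is per-request routing: the client must resolve the target collection from each request's "collection" parameter instead of latching onto whatever collection the first request used. A minimal, self-contained plain-Java sketch of that routing rule (illustrative names only; this is not the actual SolrJ/CloudSolrServer code):

```java
import java.util.Map;

// Sketch of per-request collection routing: a "collection" request
// parameter, when present, must win over the client's default collection.
public class CollectionRouter {
    private final String baseUrl;
    private final String defaultCollection;

    public CollectionRouter(String baseUrl, String defaultCollection) {
        this.baseUrl = baseUrl;
        this.defaultCollection = defaultCollection;
    }

    // Resolve the update URL for one request; params mimics the request's
    // parameter map (e.g. what UpdateRequest.setParam would carry).
    public String resolveUpdateUrl(Map<String, String> params) {
        String collection = params.getOrDefault("collection", defaultCollection);
        return baseUrl + "/" + collection + "/update";
    }

    public static void main(String[] args) {
        CollectionRouter r =
            new CollectionRouter("http://localhost:8983/solr", "collection1");
        // Explicit parameter overrides the default:
        System.out.println(r.resolveUpdateUrl(Map.of("collection", "collection2")));
        // No parameter falls back to the default:
        System.out.println(r.resolveUpdateUrl(Map.of()));
    }
}
```

The bug in the test above is precisely a violation of this rule: the second update, carrying collection=collection2, is routed as if no per-request parameter were set.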






[jira] [Resolved] (SOLR-6346) Distributed Spellcheck has inconsistent collation ordering

2014-08-12 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-6346.
--

Resolution: Invalid

Reviewing the comments from SOLR-2010, when this was all implemented, it seems
that having the collations return in a slightly different order in a
distributed configuration was done by design, as a performance trade-off.
Also, the internal ranking works even with extended results.

 Distributed Spellcheck has inconsistent collation ordering
 --

 Key: SOLR-6346
 URL: https://issues.apache.org/jira/browse/SOLR-6346
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.9
Reporter: James Dyer
Priority: Minor
 Attachments: SOLR-6346.patch


 While evaluating SOLR-3029, I found that the collationInternalRank that 
 shards pass to each other is broken.  It is not evaluated at all with 
 spellcheck.collateExtendedResults=true.  But even when evaluated, it does 
 not guarantee that collations will return ranked the same as if the request 
 was made from a non-distributed configuration.






[jira] [Updated] (LUCENE-5850) Constants#LUCENE_MAIN_VERSION can have broken values

2014-08-12 Thread Ryan Ernst (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5850?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Ernst updated LUCENE-5850:
---

Attachment: LUCENE-5850.patch

This patch consolidates LUCENE_MAIN_VERSION and associated parsing/comparison 
code into Version.java.  Specifically, it:
* Deprecates {{Constants.LUCENE_MAIN_VERSION}} and {{Constants.LUCENE_VERSION}}
* Makes {{Version}} a class instead of an enum.  This allows forward 
compatibility when parsing (e.g. 4.10.0 being able to read a 3.6.5 index that 
is as yet unreleased)
* Adds constants for specific releases (e.g. {{LUCENE_4_10_0}}) and deprecates 
older minor release constants (e.g. {{LUCENE_4_9}})
* Renames the {{LUCENE_CURRENT}} constant (keeping a deprecated alias for 
backcompat) to {{LATEST}}, and removes the deprecation on it. This is now a true 
alias for the latest version; having the "latest" alias deprecated, but not the 
actual latest version constant deprecated, was confusing.
* Changes all uses of {{StringHelper.getVersionComparator()}} to use 
{{Version.onOrAfter}}
* Makes {{SegmentInfo}} take a {{Version}}, instead of string
* Removes the display version (replaced with toString() of the latest 
version).  This didn't seem useful as it doesn't contain any interesting 
information, and would only contain extra information if built with svn (AFAICT)
* Adds {{Version.parse()}}, which only parses dot-based versions. In general, I 
think everything should use this function and we should deprecate 
parseLeniently, but I've left the latter for now.
* Removes snapshot logic as far as the version is concerned in code (it was 
only used for the display version)

I've probably forgotten some things.  All tests pass.
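
The dot-based parsing and comparison the patch centralizes can be sketched in isolation. This is an illustrative stand-in for what {{Version.parse}} and {{Version.onOrAfter}} do, not the patch itself:

```java
// Illustrative stand-in for the proposed Version.parse / Version.onOrAfter:
// strict dot-based parsing plus numeric, segment-by-segment comparison.
public class DottedVersion implements Comparable<DottedVersion> {
    private final int[] parts;

    private DottedVersion(int[] parts) { this.parts = parts; }

    // Strict parser: only "major.minor[.bugfix...]" forms, all numeric;
    // anything else (e.g. "4.3.1-SNAPSHOT") throws NumberFormatException.
    public static DottedVersion parse(String s) {
        String[] tokens = s.split("\\.");
        int[] parts = new int[tokens.length];
        for (int i = 0; i < tokens.length; i++) {
            parts[i] = Integer.parseInt(tokens[i]);
        }
        return new DottedVersion(parts);
    }

    @Override
    public int compareTo(DottedVersion other) {
        int n = Math.max(parts.length, other.parts.length);
        for (int i = 0; i < n; i++) {
            int a = i < parts.length ? parts[i] : 0;       // "4.3" == "4.3.0"
            int b = i < other.parts.length ? other.parts[i] : 0;
            if (a != b) return Integer.compare(a, b);
        }
        return 0;
    }

    // True if this version is the same as, or newer than, the other; note
    // that 4.10.0 correctly sorts after 4.9.0 (numeric, not string, compare).
    public boolean onOrAfter(DottedVersion other) {
        return compareTo(other) >= 0;
    }

    public static void main(String[] args) {
        System.out.println(parse("4.10.0").onOrAfter(parse("4.9.0"))); // true
    }
}
```

A string comparator would put "4.10.0" before "4.9.0", which is exactly the kind of breakage that motivates replacing {{StringHelper.getVersionComparator()}} with a parsed comparison.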

 Constants#LUCENE_MAIN_VERSION can have broken values 
 -

 Key: LUCENE-5850
 URL: https://issues.apache.org/jira/browse/LUCENE-5850
 Project: Lucene - Core
  Issue Type: Bug
  Components: general/build
Affects Versions: 4.3.1, 4.5.1
Reporter: Simon Willnauer
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5850.patch, LUCENE-5850.patch, LUCENE-5850.patch, 
 LUCENE-5850.patch, LUCENE-5850.patch, LUCENE-5850.patch, LUCENE-5850.patch, 
 LUCENE-5850_bomb.patch, LUCENE-5850_smoketester.patch


 Constants#LUCENE_MAIN_VERSION is set to the Lucene main version and should 
 not contain minor versions. Well, this is at least what I thought, and to my 
 knowledge what the comments say too. Yet in, for instance, 4.3.1 and 4.5.1 we 
 broke this, such that the version from SegmentsInfo cannot be parsed with 
 Version#parseLeniently. IMO we should really add an assertion that parsing 
 this constant doesn't throw an error, and/or make the smoketester catch this. 
 To me this is actually an index BWC break. Note that 4.8.1 doesn't have this 
 problem...






[jira] [Commented] (SOLR-3920) CloudSolrServer doesn't allow to index multiple collections with one instance of server

2014-08-12 Thread Zaytsev Sergey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094506#comment-14094506
 ] 

Zaytsev Sergey commented on SOLR-3920:
--

It seems to work.

Thanks a lot!

 CloudSolrServer doesn't allow to index multiple collections with one instance 
 of server
 ---

 Key: SOLR-3920
 URL: https://issues.apache.org/jira/browse/SOLR-3920
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.0-BETA
Reporter: Grzegorz Sobczyk
Assignee: Mark Miller
  Labels: 4.0.1_Candidate
 Fix For: 4.1, 5.0

 Attachments: SOLR-3920.patch


 With one instance of CloudSolrServer I can't add documents to multiple 
 collections, for example:
 {code}
 @Test
 public void shouldSendToSecondCore() throws Exception {
   //given
   try {
   CloudSolrServer server = new CloudSolrServer("localhost:9983");
   UpdateRequest commit1 = new UpdateRequest();
   commit1.setAction(ACTION.COMMIT, true, true);
   commit1.setParam("collection", "collection1");
   //this commit is bug's cause
   commit1.process(server);

   SolrInputDocument doc = new SolrInputDocument();
   doc.addField("id", "id");
   doc.addField("name", "name");

   UpdateRequest update2 = new UpdateRequest();
   update2.setParam("collection", "collection2");
   update2.add(doc);
   update2.process(server);

   UpdateRequest commit2 = new UpdateRequest();
   commit2.setAction(ACTION.COMMIT, true, true);
   commit2.setParam("collection", "collection2");
   commit2.process(server);
   SolrQuery q1 = new SolrQuery("id:id");
   q1.set("collection", "collection1");
   SolrQuery q2 = new SolrQuery("id:id");
   q2.set("collection", "collection2");

   //when
   QueryResponse resp1 = server.query(q1);
   QueryResponse resp2 = server.query(q2);

   //then
   Assert.assertEquals(0L, resp1.getResults().getNumFound());
   Assert.assertEquals(1L, resp2.getResults().getNumFound());
   } finally {
   CloudSolrServer server1 = new CloudSolrServer("localhost:9983");
   server1.setDefaultCollection("collection1");
   server1.deleteByQuery("id:id");
   server1.commit(true, true);

   CloudSolrServer server2 = new CloudSolrServer("localhost:9983");
   server2.setDefaultCollection("collection2");
   server2.deleteByQuery("id:id");
   server2.commit(true, true);
   }
 }
 {code}
 Second update goes to first collection.






[jira] [Commented] (SOLR-3029) Poor json formatting of spelling collation info

2014-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094569#comment-14094569
 ] 

ASF subversion and git services commented on SOLR-3029:
---

Commit 1617572 from jd...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1617572 ]

SOLR-3029: Spellcheck response format changes

 Poor json formatting of spelling collation info
 ---

 Key: SOLR-3029
 URL: https://issues.apache.org/jira/browse/SOLR-3029
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.0-ALPHA
Reporter: Antony Stubbs
 Fix For: 4.9, 5.0

 Attachments: SOLR-3029.patch, SOLR-3029.patch, SOLR-3029.patch


 {noformat}
 "spellcheck": {
 "suggestions": [
 "dalllas",
 {
 snip
 {
 "word": "canallas",
 "freq": 1
 }
 ]
 },
 "correctlySpelled",
 false,
 "collation",
 "dallas"
 ]
 }
 {noformat}
 The correctlySpelled and collation key/values are stored as consecutive 
 elements in an array - quite odd. Is there a reason it's not a key/value map 
 like most things?






[jira] [Resolved] (SOLR-3029) Poor json formatting of spelling collation info

2014-08-12 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-3029.
--

   Resolution: Fixed
Fix Version/s: (was: 4.9)
 Assignee: James Dyer

Committed to Trunk & added information about the response format change in 
CHANGES.txt for 5.0.

Thanks, Nalini!

 Poor json formatting of spelling collation info
 ---

 Key: SOLR-3029
 URL: https://issues.apache.org/jira/browse/SOLR-3029
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.0-ALPHA
Reporter: Antony Stubbs
Assignee: James Dyer
 Fix For: 5.0

 Attachments: SOLR-3029.patch, SOLR-3029.patch, SOLR-3029.patch


 {noformat}
 "spellcheck": {
 "suggestions": [
 "dalllas",
 {
 snip
 {
 "word": "canallas",
 "freq": 1
 }
 ]
 },
 "correctlySpelled",
 false,
 "collation",
 "dallas"
 ]
 }
 {noformat}
 The correctlySpelled and collation key/values are stored as consecutive 
 elements in an array - quite odd. Is there a reason it's not a key/value map 
 like most things?






[jira] [Comment Edited] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094578#comment-14094578
 ] 

Erik Hatcher edited comment on SOLR-6365 at 8/12/14 7:54 PM:
-

bq. naming a bunch of params and using it as a reference in queries

+1, and I'll add a bit of interesting historical correlation to Ant's data 
types http://ant.apache.org/manual/using.html#path 

I'd suggest, rather than trying to make the params be represented as HTTP query 
string fragments (a messy implementation detail; embedded Solr usage, for 
example, doesn't need to talk HTTP or query strings), that they be a {{<lst 
name="defaults"><str name="param_name">param_value</str></lst>}} kind of format.  
In the spirit of Ant, maybe something like:
{code}
  <paramset id="my_facet_params">
    <lst name="defaults">
      <str name="facet.field">category</str>
      <!-- ... -->
    </lst>
  <paramset>
{code}

And then request handlers could pick up one or more parameter sets, such as 
/select?q=query&paramset=my_facet_params (or maybe 
paramsets=my_facet_params, so they can be in guaranteed order of 
evaluation).


was (Author: ehatcher):
bq. naming a bunch of params and using it as a reference in queries

+1, and I'll add a bit of interesting historical correlation to Ant's data 
types http://ant.apache.org/manual/using.html#path 

I'd suggest, rather than trying to make the params be represented as HTTP query 
string fragments (a messy implementation detail; embedded Solr usage, for 
example, doesn't need to talk HTTP or query strings), that they be a {{<lst 
name="defaults"><str name="param_name">param_value</str></lst>}} kind of format.  
In the spirit of Ant, maybe something like:
{code}
  <paramset id="my_facet_params">
    <lst name="defaults">
      <str name="facet.field">category</str>
      <!-- ... -->
    </lst>
  <paramset>
{code}

And then request handlers could pick up one or more parameter sets, such as 
/select?q=*:*&paramset=my_facet_params (or maybe 
paramsets=my_facet_params, so they can be in guaranteed order of 
evaluation).

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 Example:
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" 
 appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to use the parameters in the same format as we pass in the http 
 request and eliminate specifying our default components in solrconfig.xml.






[jira] [Commented] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094578#comment-14094578
 ] 

Erik Hatcher commented on SOLR-6365:


bq. naming a bunch of params and using it as a reference in queries

+1, and I'll add a bit of interesting historical correlation to Ant's data 
types http://ant.apache.org/manual/using.html#path 

I'd suggest, rather than trying to make the params be represented as HTTP query 
string fragments (a messy implementation detail; embedded Solr usage, for 
example, doesn't need to talk HTTP or query strings), that they be a {{<lst 
name="defaults"><str name="param_name">param_value</str></lst>}} kind of format.  
In the spirit of Ant, maybe something like:
{code}
  <paramset id="my_facet_params">
    <lst name="defaults">
      <str name="facet.field">category</str>
      <!-- ... -->
    </lst>
  <paramset>
{code}

And then request handlers could pick up one or more parameter sets, such as 
/select?q=*:*&paramset=my_facet_params (or maybe 
paramsets=my_facet_params, so they can be in guaranteed order of 
evaluation).
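
The defaults/invariants/appends semantics under discussion can be sketched independently of solrconfig, under the usual Solr interpretation: defaults apply only when a param is absent, appends accumulate extra values, and invariants unconditionally override. A plain-Java sketch of that merge rule (illustrative class and field names, not Solr's actual code):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of merging a named param set into a request with the usual Solr
// semantics: defaults fill in missing params, appends accumulate values,
// invariants always win over caller-supplied values.
public class ParamSet {
    final Map<String, List<String>> defaults = new LinkedHashMap<>();
    final Map<String, List<String>> appends = new LinkedHashMap<>();
    final Map<String, List<String>> invariants = new LinkedHashMap<>();

    public Map<String, List<String>> apply(Map<String, List<String>> request) {
        Map<String, List<String>> out = new LinkedHashMap<>();
        request.forEach((k, v) -> out.put(k, new ArrayList<>(v)));
        // defaults: only when the caller did not supply the param
        defaults.forEach((k, v) -> out.computeIfAbsent(k, x -> new ArrayList<>(v)));
        // appends: added on top of whatever is there
        appends.forEach((k, v) -> out.computeIfAbsent(k, x -> new ArrayList<>()).addAll(v));
        // invariants: replace unconditionally
        invariants.forEach((k, v) -> out.put(k, new ArrayList<>(v)));
        return out;
    }

    public static void main(String[] args) {
        ParamSet ps = new ParamSet();
        ps.defaults.put("facet.field", List.of("category"));
        ps.defaults.put("wt", List.of("json"));
        ps.invariants.put("rows", List.of("10"));

        Map<String, List<String>> req = new LinkedHashMap<>();
        req.put("wt", List.of("xml"));     // caller-supplied, beats the default
        req.put("rows", List.of("1000"));  // overridden by the invariant

        System.out.println(ps.apply(req));
    }
}
```

Applying multiple named sets in a guaranteed order, as suggested with paramsets=..., would just be repeated applications of this merge.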

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 Example:
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" 
 appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to use the parameters in the same format as we pass in the http 
 request and eliminate specifying our default components in solrconfig.xml.






[jira] [Comment Edited] (SOLR-6365) specify appends, defaults, invariants outside of the component

2014-08-12 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094578#comment-14094578
 ] 

Erik Hatcher edited comment on SOLR-6365 at 8/12/14 7:54 PM:
-

bq. naming a bunch of params and using it as a reference in queries

+1, and I'll add a bit of interesting historical correlation to Ant's data 
types http://ant.apache.org/manual/using.html#path 

I'd suggest, rather than trying to make the params be represented as HTTP query 
string fragments (a messy implementation detail; embedded Solr usage, for 
example, doesn't need to talk HTTP or query strings), that they be a {{<lst 
name="defaults"><str name="param_name">param_value</str></lst>}} kind of format.  
In the spirit of Ant, maybe something like:
{code}
  <paramset id="my_facet_params">
    <lst name="defaults">
      <str name="facet.field">category</str>
      <!-- ... -->
    </lst>
  </paramset>
{code}

And then request handlers could pick up one or more parameter sets, such as 
/select?q=query&paramset=my_facet_params (or maybe 
paramsets=my_facet_params, so they can be in guaranteed order of 
evaluation).


was (Author: ehatcher):
bq. naming a bunch of params and using it as a reference in queries

+1, and I'll add a bit of interesting historical correlation to Ant's data 
types http://ant.apache.org/manual/using.html#path 

I'd suggest, rather than trying to make the params be represented as HTTP query 
string fragments (a messy implementation detail; embedded Solr usage, for 
example, doesn't need to talk HTTP or query strings), that they be a {{<lst 
name="defaults"><str name="param_name">param_value</str></lst>}} kind of format.  
In the spirit of Ant, maybe something like:
{code}
  <paramset id="my_facet_params">
    <lst name="defaults">
      <str name="facet.field">category</str>
      <!-- ... -->
    </lst>
  <paramset>
{code}

And then request handlers could pick up one or more parameter sets, such as 
/select?q=query&paramset=my_facet_params (or maybe 
paramsets=my_facet_params, so they can be in guaranteed order of 
evaluation).

 specify  appends, defaults, invariants outside of the component
 ---

 Key: SOLR-6365
 URL: https://issues.apache.org/jira/browse/SOLR-6365
 Project: Solr
  Issue Type: Improvement
Reporter: Noble Paul
Assignee: Noble Paul

 The components are configured in solrconfig.xml mostly for specifying these 
 extra parameters. If we separate these out, we can avoid specifying the 
 components altogether and make solrconfig much simpler. Eventually we want 
 users to see all functions as paths instead of components, and to control 
 these params from outside, through an API, persisted in ZK.
 Example:
 {code:xml}
 <!-- these are top level tags not specified inside any components -->
 <params path="/dataimport" defaults="config=data-config.xml"/>
 <params path="/update/*" defaults="wt=json"/>
 <params path="/some-other-path/*" defaults="a=b&c=d&e=f" invariants="x=y" 
 appends="i=j"/>
 <!-- use json for all paths and _txt as the default search field -->
 <params path="/**" defaults="wt=json&df=_txt" />
 {code}
 The idea is to use the parameters in the same format as we pass in the http 
 request and eliminate specifying our default components in solrconfig.xml.






[jira] [Commented] (SOLR-2894) Implement distributed pivot faceting

2014-08-12 Thread Andrew Muldowney (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094745#comment-14094745
 ] 

Andrew Muldowney commented on SOLR-2894:


My previous results are crap. The logs were so full of trash that the results 
are useless. After filtering out all refinement queries and other log lines 
that aren't genuine queries, the results have changed significantly.

Old:
average 125.64ms @ 10273 queries
New:
average 131.29ms @ 10279 queries

 

 Implement distributed pivot faceting
 

 Key: SOLR-2894
 URL: https://issues.apache.org/jira/browse/SOLR-2894
 Project: Solr
  Issue Type: Improvement
Reporter: Erik Hatcher
Assignee: Hoss Man
 Fix For: 4.9, 5.0

 Attachments: SOLR-2894-mincount-minification.patch, 
 SOLR-2894-reworked.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, SOLR-2894.patch, 
 SOLR-2894.patch, SOLR-2894_cloud_test.patch, dateToObject.patch, 
 pivot_mincount_problem.sh


 Following up on SOLR-792, pivot faceting currently only supports 
 undistributed mode.  Distributed pivot faceting needs to be implemented.






[jira] [Commented] (SOLR-3029) Poor json formatting of spelling collation info

2014-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094747#comment-14094747
 ] 

ASF subversion and git services commented on SOLR-3029:
---

Commit 1617602 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1617602 ]

SOLR-3029: adjust /browse did-you-mean output to new collation response format

 Poor json formatting of spelling collation info
 ---

 Key: SOLR-3029
 URL: https://issues.apache.org/jira/browse/SOLR-3029
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 4.0-ALPHA
Reporter: Antony Stubbs
Assignee: James Dyer
 Fix For: 5.0

 Attachments: SOLR-3029.patch, SOLR-3029.patch, SOLR-3029.patch


 {noformat}
 "spellcheck": {
 "suggestions": [
 "dalllas",
 {
 snip
 {
 "word": "canallas",
 "freq": 1
 }
 ]
 },
 "correctlySpelled",
 false,
 "collation",
 "dallas"
 ]
 }
 {noformat}
 The correctlySpelled and collation key/values are stored as consecutive 
 elements in an array - quite odd. Is there a reason it's not a key/value map 
 like most things?






[jira] [Commented] (SOLR-6062) Phrase queries are created for each field supplied through edismax's pf, pf2 and pf3 parameters (rather them being combined in a single dismax query)

2014-08-12 Thread Michael Dodsworth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094812#comment-14094812
 ] 

Michael Dodsworth commented on SOLR-6062:
-

[~jdyer], any feedback?

 Phrase queries are created for each field supplied through edismax's pf, pf2 
 and pf3 parameters (rather them being combined in a single dismax query)
 -

 Key: SOLR-6062
 URL: https://issues.apache.org/jira/browse/SOLR-6062
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.0
Reporter: Michael Dodsworth
Priority: Minor
 Attachments: combined-phrased-dismax.patch


 https://issues.apache.org/jira/browse/SOLR-2058 subtly changed how phrase 
 queries, created through the pf, pf2 and pf3 parameters, are merged into the 
 main user query.
 For the query: 'term1 term2' with pf2:[field1, field2, field3] we now get 
 (omitting the non phrase query section for clarity):
 {code:java}
 main query
 DisjunctionMaxQuery((field1:"term1 term2"^1.0)~0.1)
 DisjunctionMaxQuery((field2:"term1 term2"^1.0)~0.1)
 DisjunctionMaxQuery((field3:"term1 term2"^1.0)~0.1)
 {code}
 Prior to this change, we had:
 {code:java}
 main query 
 DisjunctionMaxQuery((field1:"term1 term2"^1.0 | field2:"term1 term2"^1.0 | 
 field3:"term1 term2"^1.0)~0.1)
 {code}
 The upshot being that if the phrase query "term1 term2" appears in multiple 
 fields, it will get a significant boost over the previous implementation.
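
The scoring consequence is easy to see numerically: one DisjunctionMaxQuery over all pf fields takes the max per-field phrase score (plus a small tie-break), while separate per-field queries effectively sum them. A toy illustration with made-up scores (not Lucene's actual similarity computation):

```java
import java.util.Arrays;

// Toy illustration of the scoring difference in this issue: the old combined
// dismax takes the max per-field phrase score (plus a tie-break fraction of
// the rest), while separate per-field queries add their scores together.
public class PhraseBoostDemo {
    static final double TIE = 0.1; // the ~0.1 tie-breaker in the parsed query

    // Old behaviour: one DisjunctionMaxQuery across all pf fields.
    public static double combinedDismax(double[] fieldScores) {
        double max = Arrays.stream(fieldScores).max().orElse(0);
        double sum = Arrays.stream(fieldScores).sum();
        return max + TIE * (sum - max);
    }

    // New behaviour: one phrase query per field; scores accumulate.
    public static double separateQueries(double[] fieldScores) {
        return Arrays.stream(fieldScores).sum();
    }

    public static void main(String[] args) {
        // "term1 term2" matches equally well in all three pf2 fields:
        double[] scores = {2.0, 2.0, 2.0};
        System.out.println("combined: " + combinedDismax(scores));  // max + tie-break, ~2.4
        System.out.println("separate: " + separateQueries(scores)); // sum, 6.0
    }
}
```

When the phrase matches only one field the two formulations agree; the divergence, and hence the boost described above, appears exactly when multiple fields match.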






[jira] [Commented] (SOLR-6212) upgrade Saxon-HE to 9.5.1-5 and reinstate Morphline tests that were affected under java 8/9 with 9.5.1-4

2014-08-12 Thread Michael Dodsworth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094814#comment-14094814
 ] 

Michael Dodsworth commented on SOLR-6212:
-

[~markrmil...@gmail.com], any feedback?

 upgrade Saxon-HE to 9.5.1-5 and reinstate Morphline tests that were affected 
 under java 8/9 with 9.5.1-4
 

 Key: SOLR-6212
 URL: https://issues.apache.org/jira/browse/SOLR-6212
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 5.0
Reporter: Michael Dodsworth
Assignee: Mark Miller
Priority: Minor
 Attachments: SOLR-6212.patch


 From SOLR-1301:
 For posterity, there is a thread on the dev list where we are working 
 through an issue with Saxon on java 8 and ibm's j9. Wolfgang filed 
 https://saxonica.plan.io/issues/1944 upstream. (Saxon is pulled in via 
 cdk-morphlines-saxon).
 Due to this issue, several Morphline tests were made to be 'ignored' in java 
 8+. The Saxon issue has been fixed in 9.5.1-5, so we should upgrade and 
 reinstate those tests.






[jira] [Created] (LUCENE-5882) add 4.10 docvaluesformat

2014-08-12 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5882:
---

 Summary: add 4.10 docvaluesformat
 Key: LUCENE-5882
 URL: https://issues.apache.org/jira/browse/LUCENE-5882
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir


We can improve the current format in a few ways:
* speed up Sorted/SortedSet byte[] lookup by structuring the term blocks 
differently (allow random access, more efficient bulk i/o)
* speed up reverse lookup by adding a reverse index (small: just every 1024'th 
term with useless suffixes removed).
* use the slice API for access to binary content, too.
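
The reverse-index idea in the second bullet can be sketched generically: keep every 1024th term as an in-memory anchor, binary-search the anchors, then scan at most one block. A self-contained sketch of that lookup structure (illustrative only, not the patch's on-disk format):

```java
import java.util.Arrays;

// Sketch of a sampled reverse index over a sorted term dictionary: every
// INTERVAL'th term is kept in memory; a term-to-ordinal lookup binary
// searches the sample, then scans at most one INTERVAL-sized block.
public class SampledReverseIndex {
    static final int INTERVAL = 1024;

    private final String[] terms;   // full sorted dictionary ("on disk")
    private final String[] sample;  // every INTERVAL'th term ("in RAM")

    public SampledReverseIndex(String[] sortedTerms) {
        this.terms = sortedTerms;
        this.sample = new String[(sortedTerms.length + INTERVAL - 1) / INTERVAL];
        for (int i = 0; i < sample.length; i++) {
            sample[i] = sortedTerms[i * INTERVAL];
        }
    }

    // Returns the ordinal of the term, or -1 if absent.
    public int lookup(String term) {
        int idx = Arrays.binarySearch(sample, term);
        int block = idx >= 0 ? idx : -idx - 2; // block whose anchor <= term
        if (block < 0) return -1;              // term sorts before everything
        int start = block * INTERVAL;
        int end = Math.min(start + INTERVAL, terms.length);
        for (int ord = start; ord < end; ord++) { // scan one block at most
            if (terms[ord].equals(term)) return ord;
        }
        return -1;
    }

    public static void main(String[] args) {
        String[] terms = new String[3000];
        for (int i = 0; i < 3000; i++) terms[i] = String.format("t%05d", i);
        SampledReverseIndex idx = new SampledReverseIndex(terms);
        System.out.println(idx.lookup("t02500")); // 2500
    }
}
```

The memory cost is one sampled term per 1024 terms (further shrinkable by trimming suffixes not needed to separate adjacent anchors, as the bullet notes), while lookup stays logarithmic in the sample plus one bounded scan.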






[jira] [Updated] (LUCENE-5882) add 4.10 docvaluesformat

2014-08-12 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5882:


Attachment: LUCENE-5882.patch

Patch.

Also, when cardinality is low (there would be no reverse index), compression 
saves very little RAM, so just encode as variable binary for a little extra 
speed, since it's going to be under 8KB of RAM for addressing anyway.

 add 4.10 docvaluesformat
 

 Key: LUCENE-5882
 URL: https://issues.apache.org/jira/browse/LUCENE-5882
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Attachments: LUCENE-5882.patch


 We can improve the current format in a few ways:
 * speed up Sorted/SortedSet byte[] lookup by structuring the term blocks 
 differently (allow random access, more efficient bulk i/o)
 * speed up reverse lookup by adding a reverse index (small: just every 
 1024'th term with useless suffixes removed).
 * use the slice API for access to binary content, too.






[JENKINS-MAVEN] Lucene-Solr-Maven-4.x #676: POMs out of sync

2014-08-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-4.x/676/

No tests ran.

Build Log:
[...truncated 25235 lines...]
BUILD FAILED
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:490: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/build.xml:171: 
The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/lucene/build.xml:492:
 The following error occurred while executing this line:
/usr/home/hudson/hudson-slave/workspace/Lucene-Solr-Maven-4.x/lucene/common-build.xml:543:
 Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': 
Error retrieving previous build number for artifact 
'org.apache.lucene:lucene-solr-grandparent:pom': repository metadata for: 
'snapshot org.apache.lucene:lucene-solr-grandparent:4.10-SNAPSHOT' could not be 
retrieved from repository: apache.snapshots.https due to an error: Error 
transferring file: Server returned HTTP response code: 502 for URL: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/4.10-SNAPSHOT/maven-metadata.xml

Total time: 13 minutes 21 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (SOLR-6062) Phrase queries are created for each field supplied through edismax's pf, pf2 and pf3 parameters (rather them being combined in a single dismax query)

2014-08-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6062:
---

Description: 
SOLR-2058 subtly changed how phrase queries, created through the pf, pf2 and 
pf3 parameters, are merged into the main user query.

For the query: 'term1 term2' with pf2:[field1, field2, field3] we now get 
(omitting the non phrase query section for clarity):

{code:java}
main query
DisjunctionMaxQuery((field1:term1 term2^1.0)~0.1)
DisjunctionMaxQuery((field2:term1 term2^1.0)~0.1)
DisjunctionMaxQuery((field3:term1 term2^1.0)~0.1)
{code}

Prior to this change, we had:

{code:java}
main query 
DisjunctionMaxQuery((field1:term1 term2^1.0 | field2:term1 term2^1.0 | 
field3:term1 term2^1.0)~0.1)
{code}

The upshot being that if the phrase query term1 term2 appears in multiple 
fields, it will get a significant boost over the previous implementation.
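
To make the scoring difference concrete, here is a small self-contained sketch (plain Java, not Solr code; the disMax helper mirrors DisjunctionMaxQuery's documented "max + tieBreaker * sum of the other clauses" scoring, with the tie of 0.1 taken from the ~0.1 in the queries above):

```java
public class PhraseBoostSketch {
    // DisjunctionMaxQuery scores a disjunction as:
    //   max(clauseScores) + tieBreaker * (sum of the non-max clause scores)
    static double disMax(double tieBreaker, double... clauseScores) {
        double max = 0.0, sum = 0.0;
        for (double s : clauseScores) { max = Math.max(max, s); sum += s; }
        return max + tieBreaker * (sum - max);
    }

    public static void main(String[] args) {
        double tie = 0.1;
        // Suppose the phrase "term1 term2" scores 1.0 in each of field1..field3.
        double combined = disMax(tie, 1.0, 1.0, 1.0);   // before: one dismax over all pf2 fields
        double separate = disMax(tie, 1.0)              // after: three single-clause
                        + disMax(tie, 1.0)              // dismaxes, summed by the
                        + disMax(tie, 1.0);             // enclosing BooleanQuery
        System.out.println("combined=" + combined + " separate=" + separate);
    }
}
```

With a per-field phrase score of 1.0, the old combined form scores about 1.2 while the new summed form scores 3.0: the multi-field boost the description refers to.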

  was:
https://issues.apache.org/jira/browse/SOLR-2058 subtly changed how phrase 
queries, created through the pf, pf2 and pf3 parameters, are merged into the 
main user query.

For the query: 'term1 term2' with pf2:[field1, field2, field3] we now get 
(omitting the non phrase query section for clarity):

{code:java}
main query
DisjunctionMaxQuery((field1:term1 term2^1.0)~0.1)
DisjunctionMaxQuery((field2:term1 term2^1.0)~0.1)
DisjunctionMaxQuery((field3:term1 term2^1.0)~0.1)
{code}

Prior to this change, we had:

{code:java}
main query 
DisjunctionMaxQuery((field1:term1 term2^1.0 | field2:term1 term2^1.0 | 
field3:term1 term2^1.0)~0.1)
{code}

The upshot being that if the phrase query term1 term2 appears in multiple 
fields, it will get a significant boost over the previous implementation.


 Phrase queries are created for each field supplied through edismax's pf, pf2 
 and pf3 parameters (rather them being combined in a single dismax query)
 -

 Key: SOLR-6062
 URL: https://issues.apache.org/jira/browse/SOLR-6062
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.0
Reporter: Michael Dodsworth
Priority: Minor
 Attachments: combined-phrased-dismax.patch


 SOLR-2058 subtly changed how phrase queries, created through the pf, pf2 and 
 pf3 parameters, are merged into the main user query.
 For the query: 'term1 term2' with pf2:[field1, field2, field3] we now get 
 (omitting the non phrase query section for clarity):
 {code:java}
 main query
 DisjunctionMaxQuery((field1:term1 term2^1.0)~0.1)
 DisjunctionMaxQuery((field2:term1 term2^1.0)~0.1)
 DisjunctionMaxQuery((field3:term1 term2^1.0)~0.1)
 {code}
 Prior to this change, we had:
 {code:java}
 main query 
 DisjunctionMaxQuery((field1:term1 term2^1.0 | field2:term1 term2^1.0 | 
 field3:term1 term2^1.0)~0.1)
 {code}
 The upshot being that if the phrase query term1 term2 appears in multiple 
 fields, it will get a significant boost over the previous implementation.






[jira] [Commented] (SOLR-6062) Phrase queries are created for each field supplied through edismax's pf, pf2 and pf3 parameters (rather them being combined in a single dismax query)

2014-08-12 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094874#comment-14094874
 ] 

Erik Hatcher commented on SOLR-6062:


[~mdodswo...@salesforce.com] looks reasonable to me, but I think it's best if we 
get some others who are deep into this stuff to weigh in.  One apparent typo in 
the patch: WORK_GRAM_EXTRACTOR should be WORD.







[jira] [Commented] (SOLR-6062) Phrase queries are created for each field supplied through edismax's pf, pf2 and pf3 parameters (rather them being combined in a single dismax query)

2014-08-12 Thread Michael Dodsworth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094907#comment-14094907
 ] 

Michael Dodsworth commented on SOLR-6062:
-

Thanks for looking at this, [~ehatcher]. Any suggestions on folks to pull in?







[jira] [Commented] (SOLR-6062) Phrase queries are created for each field supplied through edismax's pf, pf2 and pf3 parameters (rather them being combined in a single dismax query)

2014-08-12 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094926#comment-14094926
 ] 

Erik Hatcher commented on SOLR-6062:


bq. Any suggestions on folks to pull in?

I guess it's easier to ask whether anyone has objections or sees cons to making 
this change.  It seems like there's a fair bit of agreement that SOLR-2058 made 
things worse and this is better.  Any negatives?







[jira] [Commented] (SOLR-6062) Phrase queries are created for each field supplied through edismax's pf, pf2 and pf3 parameters (rather them being combined in a single dismax query)

2014-08-12 Thread Michael Dodsworth (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14094935#comment-14094935
 ] 

Michael Dodsworth commented on SOLR-6062:
-

Not that I know of -- the behavior SOLR-2058 wanted is still supported (by 
supplying different slop values for the same field), as is the original 
behavior.







[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_65) - Build # 11011 - Failure!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11011/
Java: 32bit/jdk1.7.0_65 -server -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Could not register as the leader because creating the ephemeral registration 
node in ZooKeeper failed

Stack Trace:
org.apache.solr.common.SolrException: Could not register as the leader because 
creating the ephemeral registration node in ZooKeeper failed
at 
__randomizedtesting.SeedInfo.seed([FBF4C1976E4175FD:FFFC4E647CE49ADC]:0)
at 
org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:144)
at 
org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
at 
org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:155)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
at 
org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
at 
org.apache.solr.cloud.OverseerTest$MockZKController.publishState(OverseerTest.java:155)
at 
org.apache.solr.cloud.OverseerTest.testOverseerFailure(OverseerTest.java:660)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

[jira] [Updated] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-08-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-3711:
---

Attachment: SOLR-3711.patch

[~janhoy] - how's this?  If this general idea works for you, I will generalize 
it to work for all facet labels (not just facet fields).  What's the best 
truncation size and suffix string?  I used 25 and the default of "...".  The 
title of the facet filter links, on mouse hover, is the full untruncated value.
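
The truncation being discussed is simple enough to sketch; this is an illustrative helper only (names are hypothetical, not the code in SOLR-3711.patch):

```java
public class FacetLabelTruncate {
    // Truncate a display label to maxLen characters, appending a suffix;
    // callers keep the full value around (e.g. for the link's title attribute,
    // shown on mouse hover).
    static String truncate(String label, int maxLen, String suffix) {
        if (label.length() <= maxLen) return label;
        return label.substring(0, maxLen) + suffix;
    }

    public static void main(String[] args) {
        String full = "application/vnd.openxmlformats-officedocument.wordprocessingml.document";
        System.out.println(truncate(full, 25, "..."));  // truncated display text
        System.out.println(full);                        // full untruncated value
    }
}
```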

 Velocity: Break or truncate long strings in facet output
 

 Key: SOLR-3711
 URL: https://issues.apache.org/jira/browse/SOLR-3711
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Reporter: Jan Høydahl
Assignee: Erik Hatcher
  Labels: /browse
 Fix For: 5.0

 Attachments: SOLR-3711.patch


 In Solritas /browse GUI, if facets contain very long strings (such as 
 content-type tend to do), currently the too long text runs over the main 
 column and it is not pretty.
 Perhaps inserting a Soft Hyphen, &shy; 
 (http://en.wikipedia.org/wiki/Soft_hyphen), at position N in very long terms 
 is a solution?






[jira] [Comment Edited] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-08-12 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14095027#comment-14095027
 ] 

Erik Hatcher edited comment on SOLR-3711 at 8/13/14 1:54 AM:
-

[~janhoy] - how's this?   If this general idea works for you, I will generalize 
it to work for all facet labels (not just facet fields).  What's the best 
truncation size and suffix string?  I used 20 and the default of ... in this 
initial patch.   The title of the facet filter links, on mouse hover, is the 
full untruncated value.


was (Author: ehatcher):
[~janhoy] - how's this?  If this general idea works for you, I will generalize 
it to work for all facet labels (not just facet fields).  What's the best 
truncation size and suffix string?  I used 25 and the default of "...".  The 
title of the facet filter links, on mouse hover, is the full untruncated value.







[jira] [Commented] (SOLR-3957) Remove response WARNING of This response format is experimental

2014-08-12 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14095042#comment-14095042
 ] 

ASF subversion and git services commented on SOLR-3957:
---

Commit 1617651 from [~ehatcher] in branch 'dev/trunk'
[ https://svn.apache.org/r1617651 ]

SOLR-3957: Removed experimental warning from a couple of Solr handlers

 Remove response WARNING of This response format is experimental
 ---

 Key: SOLR-3957
 URL: https://issues.apache.org/jira/browse/SOLR-3957
 Project: Solr
  Issue Type: Wish
Affects Versions: 4.0
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0

 Attachments: SOLR-3957.patch


 Remove all the useless (which I daresay is all of them) response WARNINGs 
 stating "This response format is experimental".
 At this point, all of these are more than just experimental, and even if so 
 things are subject to change and in most cases can be done in a compatible 
 manner anyway.
 Less noise.






[jira] [Resolved] (SOLR-3957) Remove response WARNING of This response format is experimental

2014-08-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-3957.


Resolution: Fixed

I think it's best to just do this on trunk/5.0, and leave it alone for 4.x.  I 
can't imagine someone relies on it, but maybe someone does have a custom 
component that calls addExperimentalFormatWarning, in which case they'll have to 
remove it (or clone it) when going to 5.0.







[jira] [Assigned] (SOLR-4685) JSON response write modification to support RAW JSON

2014-08-12 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-4685:
--

Assignee: (was: Erik Hatcher)

[~billnbell] - Bill, sorry this hasn't yet been addressed to your satisfaction. 
 I'm not sure what motivated me initially to assign this to myself.  After 
looking at it a bit more and realizing I don't have any personal/professional 
motivation, I'm unassigning myself.  It'd be interesting to hear from others who 
are either using your patch or want it.

 JSON response write modification to support RAW JSON
 

 Key: SOLR-4685
 URL: https://issues.apache.org/jira/browse/SOLR-4685
 Project: Solr
  Issue Type: Improvement
Reporter: Bill Bell
Priority: Minor
 Attachments: SOLR-4685.1.patch, SOLR-4685.SOLR_4_5.patch


 If the field name ends with _json, allow the field to return raw JSON.
 For example, the field:
 office_json -- string
 I already put raw JSON, already escaped, into the field. I want it to come back 
 with no surrounding double quotes and not escaped.
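
A minimal sketch of the requested behavior (a hypothetical helper, not Solr's actual JSONResponseWriter; the escaping is deliberately minimal for illustration):

```java
public class RawJsonFieldSketch {
    // Emit a field value: pass *_json fields through verbatim, and
    // quote-and-escape everything else as a JSON string.
    static String writeField(String name, String value) {
        if (name.endsWith("_json")) {
            return value; // value is assumed to already be valid JSON
        }
        // minimal escaping; a real writer handles the full JSON string grammar
        return "\"" + value.replace("\\", "\\\\").replace("\"", "\\\"") + "\"";
    }

    public static void main(String[] args) {
        System.out.println(writeField("office_json", "{\"city\":\"Denver\"}"));
        System.out.println(writeField("office", "{\"city\":\"Denver\"}"));
    }
}
```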






[JENKINS] Lucene-Solr-4.x-Windows (64bit/jdk1.8.0_11) - Build # 4154 - Failure!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/4154/
Java: 64bit/jdk1.8.0_11 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 11958 lines...]
   [junit4] JVM J0: stdout was not empty, see: 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\temp\junit4-J0-20140813_020736_219.sysout
   [junit4]  JVM J0: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  EXCEPTION_ACCESS_VIOLATION (0xc005) at 
pc=0x7301dc76, pid=9612, tid=7388
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (8.0_11-b12) (build 
1.8.0_11-b12)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.11-b03 mixed mode 
windows-amd64 compressed oops)
   [junit4] # Problematic frame:
   [junit4] # V  [jvm.dll+0x23dc76]
   [junit4] #
   [junit4] # Failed to write core dump. Minidumps are not enabled by default 
on client versions of Windows
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\J0\hs_err_pid9612.log
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
   [junit4] #
   [junit4]  JVM J0: EOF 

[...truncated 1 lines...]
   [junit4] ERROR: JVM J0 ended with an exception, command line: 
C:\Users\JenkinsSlave\tools\java\64bit\jdk1.8.0_11\jre\bin\java.exe 
-XX:+UseCompressedOops -XX:+UseParallelGC -XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\heapdumps
 -Dtests.prefix=tests -Dtests.seed=83E86614D6A6EECE -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.10 
-Dtests.cleanthreads=perClass 
-Djava.util.logging.config.file=C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\tools\junit4\logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 
-DtempDir=. -Djava.io.tmpdir=. 
-Djunit4.tempDir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\solr\build\solr-core\test\temp
 
-Dclover.db.dir=C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\build\clover\db
 -Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Djava.security.policy=C:\Users\JenkinsSlave\workspace\Lucene-Solr-4.x-Windows\lucene\tools\junit4\tests.policy
 -Dlucene.version=4.10-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dtests.disableHdfs=true 
-Dfile.encoding=ISO-8859-1 -classpath 

[JENKINS] Lucene-Solr-NightlyTests-4.x - Build # 596 - Still Failing

2014-08-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-4.x/596/

4 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDistribSearch

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:25801/_h/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:25801/_h/collection1
at 
__randomizedtesting.SeedInfo.seed([E7864240DC5F2611:6660CC58AB00462D]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:562)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:144)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Am I just completely missing the boat or is suggester weightField totally undocumented?

2014-08-12 Thread Erick Erickson
Upgrading from Solr 3.5 to 4.8 for a client and mucking around in
solrconfig.xml. I thought "Hey, let's uncomment the suggester just for yucks
so they can try it out."

Fine so far, but then I looked to try to figure out what part of their
schema to use for the entry in the suggest component:
<str name="weightField">price</str>

and there isn't anything in any documentation that I can find. If I
take it out I get an error at startup since it's a required field. For
instance, it's not in the ref guide. Google doesn't help. Looking at the
code also doesn't explain much, and isn't very friendly anyway. I _suspect_
that it's ok to have a bogus entry here since the code seems to return 0 if
there's no field (on a very quick glance) in which case it seems like it's
_not_ really required.

So... what happens if the field is absent? What kinds of values _should_ be
in it? What do they do? And all that rot.

If we're going to require it, we should provide some guidance somewhere.
Worth a JIRA?

Erick
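
[For readers hitting the same question: the entry in question lives inside a
suggest search component in solrconfig.xml. A minimal sketch of such a
component follows; the suggester name and the field names ("name", "price")
are placeholders for whatever the actual schema defines, not values from
Erick's config.]

```xml
<!-- Sketch of a SuggestComponent entry for solrconfig.xml.
     "mySuggester", "name", "price", and "text_general" are illustrative. -->
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">AnalyzingInfixLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <!-- field holding the suggestion text -->
    <str name="field">name</str>
    <!-- numeric field whose value ranks the suggestions -->
    <str name="weightField">price</str>
    <str name="suggestAnalyzerFieldType">text_general</str>
  </lst>
</searchComponent>
```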


[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_65) - Build # 11012 - Still Failing!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11012/
Java: 32bit/jdk1.7.0_65 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.OverseerTest.testOverseerFailure

Error Message:
Could not register as the leader because creating the ephemeral registration node in ZooKeeper failed

Stack Trace:
org.apache.solr.common.SolrException: Could not register as the leader because creating the ephemeral registration node in ZooKeeper failed
at __randomizedtesting.SeedInfo.seed([6641C8D44BE5DD9F:62494727594032BE]:0)
at org.apache.solr.cloud.ShardLeaderElectionContextBase.runLeaderProcess(ElectionContext.java:144)
at org.apache.solr.cloud.LeaderElector.runIamLeaderProcess(LeaderElector.java:163)
at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:125)
at org.apache.solr.cloud.LeaderElector.checkIfIamLeader(LeaderElector.java:155)
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:314)
at org.apache.solr.cloud.LeaderElector.joinElection(LeaderElector.java:221)
at org.apache.solr.cloud.OverseerTest$MockZKController.publishState(OverseerTest.java:155)
at org.apache.solr.cloud.OverseerTest.testOverseerFailure(OverseerTest.java:660)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 

Re: Am I just completely missing the boat or is suggester weightField totally undocumented?

2014-08-12 Thread Varun Thacker
Hi Erick,

Good that you brought it up. I started working on drafting out the
documentation for the Suggester.

SOLR-5683 is the Jira for the documentation. This is what was written
originally:
"DocumentDictionaryFactory – user can specify a suggestion field along with
optional weight and payload fields from their search index."

Although, as you said, the weight field is currently not optional.

Answering your question on what they do: the weights are used for sorting the
results, since there is no other relevance factor involved.

For a user who doesn't have weights calculated for their entries, simply
getting to use the AnalyzingInfixSuggester or some other suggester is
beneficial by itself. If more people agree, I could create a Jira and work on
making the weight field optional.
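
[For context on how those weights surface to users: a suggest component is
normally exposed through a request handler, and suggestions come back ordered
by weight, highest first. A sketch of such a handler, assuming a suggester
registered under the illustrative name "mySuggester" — the handler path and
names are placeholders, not from this thread:]

```xml
<!-- Sketch of a request handler exposing a suggester; "/suggest",
     "mySuggester", and the component name "suggest" are illustrative. -->
<requestHandler name="/suggest" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="suggest">true</str>
    <str name="suggest.dictionary">mySuggester</str>
    <!-- number of suggestions to return, sorted by weight descending -->
    <str name="suggest.count">10</str>
  </lst>
  <arr name="components">
    <str>suggest</str>
  </arr>
</requestHandler>
```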

On Wed, Aug 13, 2014 at 9:43 AM, Erick Erickson erickerick...@gmail.com
wrote:

 Upgrading from Solr 3.5 to 4.8 for a client and mucking around in
 solrconfig.xml. I thought, "Hey, let's uncomment the suggester just for yucks
 so they can try it out."

 Fine so far, but then I looked to try to figure out what part of their
 schema to use for the entry in the suggest component:
 <str name="weightField">price</str>

 and there isn't anything in any documentation that I can find. If I
 take it out I get an error at startup since it's a required field. For
 instance, it's not in the ref guide. Google doesn't help. Looking at the
 code also doesn't explain much, and isn't very friendly anyway. I _suspect_
 that it's ok to have a bogus entry here since the code seems to return 0 if
 there's no field (on a very quick glance) in which case it seems like it's
 _not_ really required.

 So.. what happens if the field is absent? What kinds of values _should_ be
 in it? What do they do? and all that rot

 If we're going to require it, we should provide some guidance somewhere.
 Worth a JIRA?

 Erick




-- 


Regards,
Varun Thacker
http://www.vthacker.in/


[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.7.0_65) - Build # 10893 - Failure!

2014-08-12 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10893/
Java: 64bit/jdk1.7.0_65 -XX:-UseCompressedOops -XX:+UseG1GC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([53B19485E404282D]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:332)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:617)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:180)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
4 threads leaked from SUITE scope at org.apache.solr.rest.TestManagedResourceStorage:
   1) Thread[id=2600, name=Thread-1021, state=TIMED_WAITING, group=Overseer state updater.]
at java.lang.Object.wait(Native Method)
at org.apache.solr.cloud.DistributedQueue$LatchChildWatcher.await(DistributedQueue.java:266)
at org.apache.solr.cloud.DistributedQueue.getChildren(DistributedQueue.java:309)
at org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:582)
at org.apache.solr.cloud.DistributedQueue.peek(DistributedQueue.java:560)
at org.apache.solr.cloud.Overseer$ClusterStateUpdater.run(Overseer.java:215)
at java.lang.Thread.run(Thread.java:745)
   2) Thread[id=2607, name=coreZkRegister-1531-thread-1, state=WAITING, group=TGRP-TestManagedResourceStorage]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1068)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=2604, name=searcherExecutor-1537-thread-1, state=WAITING, group=TGRP-TestManagedResourceStorage]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
 at 

[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1183: POMs out of sync

2014-08-12 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1183/

2 tests failed.
FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup:
   1) Thread[id=4073, name=Thread-1606, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.apache.solr.handler.TestReplicationHandlerBackup:
   1) Thread[id=4073, name=Thread-1606, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)
at __randomizedtesting.SeedInfo.seed([9345D8B5CABACFEA]:0)


FAILED:  org.apache.solr.handler.TestReplicationHandlerBackup.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:
   1) Thread[id=4073, name=Thread-1606, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   1) Thread[id=4073, name=Thread-1606, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at