[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.8.0_20-ea-b23) - Build # 4131 - Failure!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/4131/
Java: 32bit/jdk1.8.0_20-ea-b23 -server -XX:+UseG1GC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
We have a failed SPLITSHARD task

Stack Trace:
java.lang.AssertionError: We have a failed SPLITSHARD task
at 
__randomizedtesting.SeedInfo.seed([33A8FE5968111D95:B24E70411F4E7DA9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:125)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (LUCENE-5864) Split BytesRef into BytesRef and BytesRefBuilder

2014-08-01 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5864:
-

Attachment: LUCENE-5864.patch

Removed a println that I had added for debugging.

 Split BytesRef into BytesRef and BytesRefBuilder
 

 Key: LUCENE-5864
 URL: https://issues.apache.org/jira/browse/LUCENE-5864
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 4.10

 Attachments: LUCENE-5864.patch, LUCENE-5864.patch, LUCENE-5864.patch


 Follow-up of LUCENE-5836.
 The fact that BytesRef (and CharsRef, IntsRef, LongsRef) can be used as 
 either pointers to a section of a byte[] or as buffers raises issues. The 
 idea would be to keep BytesRef but remove all the buffer methods like 
 copyBytes, grow, etc. and add a new class BytesRefBuilder that wraps a byte[] 
 and a length (but no offset), has grow/copyBytes/copyChars methods and the 
 ability to build BytesRef instances.
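 The proposal can be sketched in plain Java. The class and method names below are guesses taken only from this issue's text (grow/copyBytes/copyChars, a byte[] plus length but no offset), not from the attached patch:

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Hypothetical sketch of the proposed builder; the real patch may differ.
final class BytesRefBuilderSketch {
    private byte[] bytes = new byte[16];
    private int length; // no offset: the builder's content always starts at 0

    /** Grow the backing array if needed. */
    void grow(int capacity) {
        if (bytes.length < capacity) {
            bytes = Arrays.copyOf(bytes, Math.max(capacity, bytes.length * 2));
        }
    }

    /** Replace the current content with the given bytes. */
    void copyBytes(byte[] src, int off, int len) {
        grow(len);
        System.arraycopy(src, off, bytes, 0, len);
        length = len;
    }

    /** Replace the current content with the UTF-8 encoding of the chars. */
    void copyChars(CharSequence text) {
        byte[] utf8 = text.toString().getBytes(StandardCharsets.UTF_8);
        copyBytes(utf8, 0, utf8.length);
    }

    /** Build a plain pointer-style value (stands in for a built BytesRef). */
    byte[] toBytes() {
        return Arrays.copyOf(bytes, length);
    }
}
{code}
 The key property is that all buffer management (grow/copy) lives in the builder, while BytesRef stays a pure pointer to a section of a byte[].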



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5865) create fork of analyzers module without Version

2014-08-01 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5865:
---

 Summary: create fork of analyzers module without Version
 Key: LUCENE-5865
 URL: https://issues.apache.org/jira/browse/LUCENE-5865
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 4.10


Since this is obviously too controversial to fix, we can just add 
*alternatives* that don't have this messy API, e.g. under analyzers-simple.

These won't have Version. They don't need factories, because they are actually 
geared toward being usable by Lucene users.

One nice thing is that this way the problem can be fixed in 4.10.






[jira] [Commented] (LUCENE-5865) create fork of analyzers module without Version

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082020#comment-14082020
 ] 

Robert Muir commented on LUCENE-5865:
-

Unlike the existing analyzers module, the big freedom here is that back compat 
isn't provided. 

 create fork of analyzers module without Version
 ---

 Key: LUCENE-5865
 URL: https://issues.apache.org/jira/browse/LUCENE-5865
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 4.10


 Since this is obviously too controversial to fix, we can just add 
 *alternatives* that don't have this messy API, e.g. under analyzers-simple.
 These won't have Version. They don't need factories, because they are 
 actually geared toward being usable by Lucene users.
 One nice thing is that this way the problem can be fixed in 4.10.






[JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.8.0_11) - Build # 10816 - Still Failing!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10816/
Java: 32bit/jdk1.8.0_11 -server -XX:+UseG1GC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
ERROR: SolrZkClient opens=20 closes=19

Stack Trace:
java.lang.AssertionError: ERROR: SolrZkClient opens=20 closes=19
at __randomizedtesting.SeedInfo.seed([1B0FC9BA48BC9D79]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingZkClients(SolrTestCaseJ4.java:438)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:183)
at sun.reflect.GeneratedMethodAccessor29.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
4 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest:
   1) Thread[id=5290, 
name=TEST-CollectionsAPIDistributedZkTest.testDistribSearch-seed#[1B0FC9BA48BC9D79]-SendThread(127.0.0.1:40204),
 state=TIMED_WAITING, group=TGRP-CollectionsAPIDistributedZkTest]
        at java.lang.Thread.sleep(Native Method)
        at org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:101)
        at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:940)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1003)
   2) Thread[id=5382, name=zkCallback-816-thread-3, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
        at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
   3) Thread[id=5406, name=zkCallback-816-thread-4, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
        at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
        at 

[jira] [Updated] (LUCENE-5865) create fork of analyzers module without Version

2014-08-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5865:


Attachment: LUCENE-5865.patch

Attached is a patch. 

I think this provides an alternative, in a case where we disagree. 

 create fork of analyzers module without Version
 ---

 Key: LUCENE-5865
 URL: https://issues.apache.org/jira/browse/LUCENE-5865
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 4.10

 Attachments: LUCENE-5865.patch


 Since this is obviously too controversial to fix, we can just add 
 *alternatives* that don't have this messy API, e.g. under analyzers-simple.
 These won't have Version. They don't need factories, because they are 
 actually geared toward being usable by Lucene users.
 One nice thing is that this way the problem can be fixed in 4.10.






[jira] [Commented] (SOLR-2199) DIH JdbcDataSource - Support multiple resultsets

2014-08-01 Thread Thomas Champagne (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082073#comment-14082073
 ] 

Thomas Champagne commented on SOLR-2199:


I don't understand why this patch has not been applied. It is very small and 
uncomplicated. It's a good idea to read multiple result sets from a single 
statement.

But in my case, I would like to set SQL session parameters before executing the 
query. 

For example, with PostgreSQL: 
{code:sql}
SET join_collapse_limit=1;
SELECT * FROM library.book b
LEFT JOIN library.page p ON p.id_book=b.id_book;
{code}
In this example, the first result set is empty, but with the patch the second 
result set would be read.
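The pattern the patch targets needs only standard JDBC calls (Statement.execute / getMoreResults / getUpdateCount). A generic sketch follows; connection setup is assumed, and whether a single execute() accepts a multi-statement batch like the one above is driver-dependent:

{code:java}
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

final class MultiResultSets {
    /** True once the statement has produced neither a ResultSet nor an update count. */
    static boolean exhausted(boolean isResultSet, int updateCount) {
        return !isResultSet && updateCount == -1;
    }

    /** Consumes every result set produced by one statement; returns how many were read. */
    static int readAllResultSets(Connection conn, String sql) throws SQLException {
        int resultSets = 0;
        try (Statement stmt = conn.createStatement()) {
            boolean isResultSet = stmt.execute(sql);
            while (true) {
                if (isResultSet) {
                    resultSets++;
                    try (ResultSet rs = stmt.getResultSet()) {
                        while (rs.next()) {
                            rs.getObject(1); // process the row as needed
                        }
                    }
                } else if (exhausted(false, stmt.getUpdateCount())) {
                    break; // neither a result set nor an update count: done
                }
                isResultSet = stmt.getMoreResults();
            }
        }
        return resultSets;
    }
}
{code}
With this loop, the empty first result of the SET statement is simply skipped and the SELECT's result set is still read.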

 DIH JdbcDataSource - Support multiple resultsets
 

 Key: SOLR-2199
 URL: https://issues.apache.org/jira/browse/SOLR-2199
 Project: Solr
  Issue Type: Improvement
  Components: contrib - DataImportHandler
Affects Versions: 1.4.1
Reporter: Mark Waddle
 Attachments: SOLR-2199.patch


 Database servers can return multiple result sets from a single statement. 
 This can be beneficial for indexing because it reduces the number of 
 connections and statements being executed against a database, therefore 
 reducing overhead. The JDBC Statement object supports reading multiple 
 ResultSets. Support should be added to the JdbcDataSource to take advantage 
 of this.






[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.7.0) - Build # 1744 - Still Failing!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1744/
Java: 64bit/jdk1.7.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:53555/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:53555/collection1
at 
__randomizedtesting.SeedInfo.seed([110DA194ADBEDF60:90EB2F8CDAE1BF5C]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:559)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at 
org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

Re: Welcome Tomás Fernández Löbbe as Lucene/Solr committer!

2014-08-01 Thread Koji Sekiguchi

Welcome, Tomás!

Koji

(2014/08/01 0:50), Yonik Seeley wrote:

I'm pleased to announce that Tomás has accepted the PMC's invitation
to become a Lucene/Solr committer.

Tomás, it's tradition to introduce yourself with a little bio.

Congrats and Welcome!

-Yonik
http://heliosearch.org - native code faceting, facet functions,
sub-facets, off-heap data






--
http://soleami.com/blog/comparing-document-classification-functions-of-lucene-and-mahout.html




[jira] [Commented] (LUCENE-5860) Use Terms.getMin/Max to speed up range queries/filters

2014-08-01 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082107#comment-14082107
 ] 

Michael McCandless commented on LUCENE-5860:


bq. For numeric trie terms the maximum and minimum values are not quite right 
(because of the additional shift!=0 terms)? As far as I remember, the min/max 
value is found by binary search? Or is it now changed so that min/max values 
come from the min/max shift=0 term? If that's the case I am fine; otherwise I 
think the binary search is more costly.

Hmm ... you are right: this is in fact doing a binary search for a numeric 
field (see NumericUtils), which is no good: the one disk seek we save is 
replaced by several!  So I agree: until we can pull the min/max for a numeric 
field w/o any disk seeks (which I think we should do), this is really not worth 
doing for numeric fields.

 Use Terms.getMin/Max to speed up range queries/filters
 --

 Key: LUCENE-5860
 URL: https://issues.apache.org/jira/browse/LUCENE-5860
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, 4.10

 Attachments: LUCENE-5860.patch


 As of LUCENE-5610, Lucene's Terms API now exposes min and max terms in
 each field.  I think we can use this in our term/numeric range
 query/filters to avoid visiting a given segment by detecting up front
 that the terms in the segment don't overlap with the query's range.
 Even though block tree avoids disk seeks in certain cases when the
 term cannot exist on-disk, I think this change would further avoid
 disk seeks in additional cases because the min/max term has
 more/different information than the in-memory FST terms index.
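 The segment-skipping test described above can be sketched over plain byte arrays. In Lucene the endpoints would be the BytesRef values from Terms.getMin()/getMax(), but the check itself is just an unsigned lexicographic comparison (inclusive bounds assumed here for illustration):

{code:java}
final class RangeOverlap {
    // Unsigned lexicographic comparison, matching how Lucene orders terms.
    static int compare(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int cmp = (a[i] & 0xff) - (b[i] & 0xff);
            if (cmp != 0) return cmp;
        }
        return a.length - b.length;
    }

    /**
     * False when a segment whose terms span [segMin, segMax] cannot contain
     * any term in the query range [lower, upper], so the segment can be
     * skipped without visiting it at all.
     */
    static boolean mayOverlap(byte[] segMin, byte[] segMax,
                              byte[] lower, byte[] upper) {
        return compare(upper, segMin) >= 0 && compare(lower, segMax) <= 0;
    }
}
{code}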






[jira] [Created] (LUCENE-5866) Provide a way to disable the regular expressions in QueryParser/MultiFieldQueryParser

2014-08-01 Thread Tim Lebedkov (JIRA)
Tim Lebedkov created LUCENE-5866:


 Summary: Provide a way to disable the regular expressions in 
QueryParser/MultiFieldQueryParser
 Key: LUCENE-5866
 URL: https://issues.apache.org/jira/browse/LUCENE-5866
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/queryparser
Affects Versions: 4.5.1
Reporter: Tim Lebedkov


We would like to use the default parser, but disable the regular expressions.
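Pending such an option, one hypothetical stopgap is to detect the regexp delimiter before parsing and reject the query; the real fix would more likely override getRegexpQuery in a QueryParser subclass. This guard is only an illustration, not parser-exact syntax handling:

{code:java}
// Hypothetical pre-parse guard: flags query strings containing an
// unescaped, unquoted '/' (the classic QueryParser regexp delimiter).
final class RegexpGuard {
    static boolean containsRegexp(String query) {
        boolean inQuotes = false;
        for (int i = 0; i < query.length(); i++) {
            char c = query.charAt(i);
            if (c == '\\') { i++; continue; }      // skip the escaped character
            if (c == '"') inQuotes = !inQuotes;    // ignore slashes inside phrases
            else if (c == '/' && !inQuotes) return true;
        }
        return false;
    }
}
{code}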






[jira] [Updated] (LUCENE-5867) Add BooleanSimilarity

2014-08-01 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5867:


Attachment: LUCENE-5867.patch

Here's the start to a patch. No tests yet.

 Add BooleanSimilarity
 -

 Key: LUCENE-5867
 URL: https://issues.apache.org/jira/browse/LUCENE-5867
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5867.patch


 This can be used when the user doesn't want tf/idf scoring for some reason. 
 The idea is that the score is just query_time_boost * index_time_boost, no 
 queryNorm/IDF/TF/lengthNorm...






[jira] [Created] (LUCENE-5867) Add BooleanSimilarity

2014-08-01 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5867:
---

 Summary: Add BooleanSimilarity
 Key: LUCENE-5867
 URL: https://issues.apache.org/jira/browse/LUCENE-5867
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5867.patch

This can be used when the user doesn't want tf/idf scoring for some reason. The 
idea is that the score is just query_time_boost * index_time_boost, no 
queryNorm/IDF/TF/lengthNorm...






Re: [JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 27713 - Failure!

2014-08-01 Thread Robert Muir
I repeated full core tests with the same master seed / # jvms / JVM
version / G1GC over 200 times: no crashes.

I think it's a bug triggered by G1GC.

On Thu, Jul 31, 2014 at 6:05 PM, Uwe Schindler u...@thetaphi.de wrote:
 Another one for live demonstration @ http://goo.gl/aruRmO ?

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: Robert Muir [mailto:rcm...@gmail.com]
 Sent: Thursday, July 31, 2014 10:50 PM
 To: dev@lucene.apache.org
 Cc: Simon Willnauer
 Subject: Re: [JENKINS] Lucene-4x-Linux-Java7-64-test-only - Build # 27713 -
 Failure!

 It crashes at or before this test:

 Suite: org.apache.lucene.index.TestNRTReaderWithThreads

 I downloaded update 65, i will try to reproduce it.


 On Thu, Jul 31, 2014 at 3:59 PM,  buil...@flonkings.com wrote:
  Build:
  builds.flonkings.com/job/Lucene-4x-Linux-Java7-64-test-only/27713/
 
  All tests passed
 
  Build Log:
  [...truncated 625 lines...]
 [junit4] JVM J0: stdout was not empty, see:
 /var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-
 only/checkout/lucene/build/core/test/temp/junit4-J0-
 20140731_215329_011.sysout
 [junit4]  JVM J0: stdout (verbatim) 
 [junit4] #
 [junit4] # A fatal error has been detected by the Java Runtime
 Environment:
 [junit4] #
 [junit4] #  SIGSEGV (0xb) at pc=0x7fe8b1019f60, pid=21427,
 tid=140636986857216
 [junit4] #
 [junit4] # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17)
 (build 1.7.0_65-b17)
 [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed
 mode linux-amd64 compressed oops)
 [junit4] # Problematic frame:
 [junit4] # j
 org.apache.lucene.codecs.compressing.CompressingTermVectorsWriter.add
 Prox(ILorg/apache/lucene/store/DataInput;Lorg/apache/lucene/store/DataI
 nput;)V+284
 [junit4] #
 [junit4] # Failed to write core dump. Core dumps have been disabled. To
 enable core dumping, try ulimit -c unlimited before starting Java again
 [junit4] #
 [junit4] # An error report file with more information is saved as:
 [junit4] # /var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-
 only/checkout/lucene/build/core/test/J0/hs_err_pid21427.log
 [junit4] #
 [junit4] # If you would like to submit a bug report, please visit:
 [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
 [junit4] #
 [junit4]  JVM J0: EOF 
 
  [...truncated 992 lines...]
 [junit4] ERROR: JVM J0 ended with an exception, command line:
 /var/lib/jenkins/tools/hudson.model.JDK/Java_7_64bit_u65/jre/bin/java -
 Dtests.prefix=tests -Dtests.seed=3D30279F929DA404 -Xmx512M -
 Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false -
 Dtests.codec=random -Dtests.postingsformat=random -
 Dtests.docvaluesformat=random -Dtests.locale=random -
 Dtests.timezone=random -Dtests.directory=random -
 Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.10 -
 Dtests.cleanthreads=perMethod -
 Djava.util.logging.config.file=/var/lib/jenkins/workspace/Lucene-4x-Linux-
 Java7-64-test-only/checkout/lucene/tools/junit4/logging.properties -
 Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false -
 Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 -
 DtempDir=. -Djava.io.tmpdir=. -
 Djunit4.tempDir=/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-
 only/checkout/lucene/build/core/test/temp -
 Dclover.db.dir=/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-
 only/checkout/lucene/build/clover/db -
 Djava.security.manager=org.apache.lucene.util.TestSecurityManager -
 Djava.security.policy=/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-
 test-only/checkout/lucene/tools/junit4/tests.policy -Dlucene.version=4.10-
 SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1 -
 Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory -
 Djava.awt.headless=true -Djdk.map.althashing.threshold=0 -
 Dtests.leaveTemporary=false -Dtests.filterstacks=true -Dfile.encoding=UTF-
 8 -classpath /var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-
 only/checkout/lucene/build/test-
 framework/classes/java:/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-
 64-test-
 only/checkout/lucene/build/codecs/classes/java:/var/lib/jenkins/workspace
 /Lucene-4x-Linux-Java7-64-test-only/checkout/lucene/test-
 framework/lib/junit-4.10.jar:/var/lib/jenkins/workspace/Lucene-4x-Linux-
 Java7-64-test-only/checkout/lucene/test-framework/lib/randomizedtesting-
 runner-2.1.6.jar:/var/lib/jenkins/workspace/Lucene-4x-Linux-Java7-64-test-
 only/checkout/lucene/build/core/classes/java:/var/lib/jenkins/workspace/L
 ucene-4x-Linux-Java7-64-test-
 only/checkout/lucene/build/core/classes/test:/var/lib/jenkins/tools/hudson
 .tasks.Ant_AntInstallation/Ant_1.8.3/lib/ant-
 launcher.jar:/var/lib/jenkins/.ant/lib/apache-rat-tasks-
 0.8.jar:/var/lib/jenkins/.ant/lib/ivy-2.2.0.jar:/var/lib/jenkins/.ant/lib/apache-
 

[jira] [Commented] (LUCENE-5867) Add BooleanSimilarity

2014-08-01 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082122#comment-14082122
 ] 

Mikhail Khludnev commented on LUCENE-5867:
--

People often want coord-factor also.

 Add BooleanSimilarity
 -

 Key: LUCENE-5867
 URL: https://issues.apache.org/jira/browse/LUCENE-5867
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5867.patch


 This can be used when the user doesn't want tf/idf scoring for some reason. 
 The idea is that the score is just query_time_boost * index_time_boost, no 
 queryNorm/IDF/TF/lengthNorm...






[jira] [Commented] (LUCENE-5867) Add BooleanSimilarity

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082124#comment-14082124
 ] 

Robert Muir commented on LUCENE-5867:
-

This similarity is already a coordinate-level match, because it ignores TF etc 
completely and scores 1 for each matching term.
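
As a toy illustration of that scoring model (plain arithmetic, not the Lucene Similarity API): each matching term contributes its boosts exactly once, regardless of term frequency, so the total degenerates to a coord-like count of matches:

{code:java}
// Toy model: score = sum over matching terms of queryBoost * indexBoost.
// tf, idf, queryNorm and lengthNorm are ignored entirely.
final class BooleanScoreSketch {
    static float score(int matchingTerms, float queryBoost, float indexBoost) {
        return matchingTerms * queryBoost * indexBoost;
    }
}
{code}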

 Add BooleanSimilarity
 -

 Key: LUCENE-5867
 URL: https://issues.apache.org/jira/browse/LUCENE-5867
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5867.patch


 This can be used when the user doesn't want tf/idf scoring for some reason. 
 The idea is that the score is just query_time_boost * index_time_boost, no 
 queryNorm/IDF/TF/lengthNorm...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6309) AsyncMigrateRouteKeyTest failure

2014-08-01 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6309:
---

 Summary: AsyncMigrateRouteKeyTest failure
 Key: SOLR-6309
 URL: https://issues.apache.org/jira/browse/SOLR-6309
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10


Just saw this on my private jenkins:

{code}
Error Message

Task 20140128 not found in completed tasks. expected:<found 20140128 in 
[completed] tasks> but was:<found 20140128 in [running] tasks>
Stacktrace

org.junit.ComparisonFailure: Task 20140128 not found in completed tasks. 
expected:<found 20140128 in [completed] tasks> but was:<found 20140128 in 
[running] tasks>
at 
__randomizedtesting.SeedInfo.seed([531157FE6D39FBB:84D79B67918CFF87]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.solr.cloud.AsyncMigrateRouteKeyTest.checkAsyncRequestForCompletion(AsyncMigrateRouteKeyTest.java:61)
at 
org.apache.solr.cloud.AsyncMigrateRouteKeyTest.invokeMigrateApi(AsyncMigrateRouteKeyTest.java:78)
at 
org.apache.solr.cloud.MigrateRouteKeyTest.multipleShardMigrateTest(MigrateRouteKeyTest.java:202)
at 
org.apache.solr.cloud.AsyncMigrateRouteKeyTest.doTest(AsyncMigrateRouteKeyTest.java:46)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6309) AsyncMigrateRouteKeyTest failure

2014-08-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082161#comment-14082161
 ] 

Shalin Shekhar Mangar commented on SOLR-6309:
-

Same problem as the other async test. It polls for status every second but 
waits only 20 seconds for the migrate command to finish before giving up. That is 
too short: I've seen a simple shard split take more than 90 seconds on jenkins, so 
this should wait at least a couple of minutes before giving up.

 AsyncMigrateRouteKeyTest failure
 

 Key: SOLR-6309
 URL: https://issues.apache.org/jira/browse/SOLR-6309
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10


 Just saw this on my private jenkins:
 {code}
 Error Message
 Task 20140128 not found in completed tasks. expected:<found 20140128 in 
 [completed] tasks> but was:<found 20140128 in [running] tasks>
 Stacktrace
 org.junit.ComparisonFailure: Task 20140128 not found in completed tasks. 
 expected:<found 20140128 in [completed] tasks> but was:<found 20140128 in 
 [running] tasks>
   at 
 __randomizedtesting.SeedInfo.seed([531157FE6D39FBB:84D79B67918CFF87]:0)
   at org.junit.Assert.assertEquals(Assert.java:125)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.checkAsyncRequestForCompletion(AsyncMigrateRouteKeyTest.java:61)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.invokeMigrateApi(AsyncMigrateRouteKeyTest.java:78)
   at 
 org.apache.solr.cloud.MigrateRouteKeyTest.multipleShardMigrateTest(MigrateRouteKeyTest.java:202)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.doTest(AsyncMigrateRouteKeyTest.java:46)
   at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5867) Add BooleanSimilarity

2014-08-01 Thread Tommaso Teofili (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082162#comment-14082162
 ] 

Tommaso Teofili commented on LUCENE-5867:
-

+1

 Add BooleanSimilarity
 -

 Key: LUCENE-5867
 URL: https://issues.apache.org/jira/browse/LUCENE-5867
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5867.patch


 This can be used when the user doesn't want tf/idf scoring for some reason. 
 The idea is that the score is just query_time_boost * index_time_boost, no 
 queryNorm/IDF/TF/lengthNorm...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-trunk-Linux-Java7-64-test-only - Build # 91441 - Failure!

2014-08-01 Thread builder
Build: builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/91441/

All tests passed

Build Log:
[...truncated 833 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/build/core/test/temp/junit4-J1-20140801_134930_778.sysout
   [junit4]  JVM J1: stdout (verbatim) 
   [junit4] #
   [junit4] # A fatal error has been detected by the Java Runtime Environment:
   [junit4] #
   [junit4] #  SIGSEGV (0xb) at pc=0x7f36a86f0010, pid=9111, 
tid=139872414435072
   [junit4] #
   [junit4] # JRE version: Java(TM) SE Runtime Environment (7.0_65-b17) (build 
1.7.0_65-b17)
   [junit4] # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.65-b04 mixed mode 
linux-amd64 compressed oops)
   [junit4] # Problematic frame:
   [junit4] # V  [libjvm.so+0xdfe010]  _fini+0x43b7a8
   [junit4] #
   [junit4] # Failed to write core dump. Core dumps have been disabled. To 
enable core dumping, try ulimit -c unlimited before starting Java again
   [junit4] #
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/build/core/test/J1/hs_err_pid9111.log
   [junit4] Compiled method (c2)  195857 2513   ! 
org.apache.lucene.index.IndexWriter::updateDocument (108 bytes)
   [junit4]  total in heap  [0x7f369d39b450,0x7f369d39f938] = 17640
   [junit4]  relocation [0x7f369d39b570,0x7f369d39b838] = 712
   [junit4]  main code  [0x7f369d39b840,0x7f369d39cec0] = 5760
   [junit4]  stub code  [0x7f369d39cec0,0x7f369d39d088] = 456
   [junit4]  oops   [0x7f369d39d088,0x7f369d39d228] = 416
   [junit4]  scopes data[0x7f369d39d228,0x7f369d39ee98] = 7280
   [junit4]  scopes pcs [0x7f369d39ee98,0x7f369d39f488] = 1520
   [junit4]  dependencies   [0x7f369d39f488,0x7f369d39f4b0] = 40
   [junit4]  handler table  [0x7f369d39f4b0,0x7f369d39f888] = 984
   [junit4]  nul chk table  [0x7f369d39f888,0x7f369d39f938] = 176
   [junit4] Compiled method (c2)  195857 2513   ! 
org.apache.lucene.index.IndexWriter::updateDocument (108 bytes)
   [junit4]  total in heap  [0x7f369d39b450,0x7f369d39f938] = 17640
   [junit4]  relocation [0x7f369d39b570,0x7f369d39b838] = 712
   [junit4]  main code  [0x7f369d39b840,0x7f369d39cec0] = 5760
   [junit4]  stub code  [0x7f369d39cec0,0x7f369d39d088] = 456
   [junit4]  oops   [0x7f369d39d088,0x7f369d39d228] = 416
   [junit4]  scopes data[0x7f369d39d228,0x7f369d39ee98] = 7280
   [junit4]  scopes pcs [0x7f369d39ee98,0x7f369d39f488] = 1520
   [junit4]  dependencies   [0x7f369d39f488,0x7f369d39f4b0] = 40
   [junit4]  handler table  [0x7f369d39f4b0,0x7f369d39f888] = 984
   [junit4]  nul chk table  [0x7f369d39f888,0x7f369d39f938] = 176
   [junit4] Compiled method (c2)  195858 2513   ! 
org.apache.lucene.index.IndexWriter::updateDocument (108 bytes)
   [junit4]  total in heap  [0x7f369d39b450,0x7f369d39f938] = 17640
   [junit4]  relocation [0x7f369d39b570,0x7f369d39b838] = 712
   [junit4]  main code  [0x7f369d39b840,0x7f369d39cec0] = 5760
   [junit4]  stub code  [0x7f369d39cec0,0x7f369d39d088] = 456
   [junit4]  oops   [0x7f369d39d088,0x7f369d39d228] = 416
   [junit4]  scopes data[0x7f369d39d228,0x7f369d39ee98] = 7280
   [junit4]  scopes pcs [0x7f369d39ee98,0x7f369d39f488] = 1520
   [junit4]  dependencies   [0x7f369d39f488,0x7f369d39f4b0] = 40
   [junit4]  handler table  [0x7f369d39f4b0,0x7f369d39f888] = 984
   [junit4]  nul chk table  [0x7f369d39f888,0x7f369d39f938] = 176
   [junit4] #
   [junit4] # If you would like to submit a bug report, please visit:
   [junit4] #   http://bugreport.sun.com/bugreport/crash.jsp
   [junit4] #
   [junit4]  JVM J1: EOF 

[...truncated 726 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/var/lib/jenkins/tools/hudson.model.JDK/Java_7_64bit_u65/jre/bin/java 
-Dtests.prefix=tests -Dtests.seed=7AAF8DDA1EE6B681 -Xmx512M -Dtests.iters= 
-Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random 
-Dtests.postingsformat=random -Dtests.docvaluesformat=random 
-Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random 
-Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=5.0 
-Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/var/lib/jenkins/workspace/Lucene-trunk-Linux-Java7-64-test-only/checkout/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts.gracious=false -Dtests.multiplier=1 
-DtempDir=./temp -Djava.io.tmpdir=./temp 

[jira] [Updated] (SOLR-6309) AsyncMigrateRouteKeyTest failure

2014-08-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6309:


Attachment: SOLR-6309.patch

Increase max wait to 2 minutes.

 AsyncMigrateRouteKeyTest failure
 

 Key: SOLR-6309
 URL: https://issues.apache.org/jira/browse/SOLR-6309
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6309.patch


 Just saw this on my private jenkins:
 {code}
 Error Message
 Task 20140128 not found in completed tasks. expected:<found 20140128 in 
 [completed] tasks> but was:<found 20140128 in [running] tasks>
 Stacktrace
 org.junit.ComparisonFailure: Task 20140128 not found in completed tasks. 
 expected:<found 20140128 in [completed] tasks> but was:<found 20140128 in 
 [running] tasks>
   at 
 __randomizedtesting.SeedInfo.seed([531157FE6D39FBB:84D79B67918CFF87]:0)
   at org.junit.Assert.assertEquals(Assert.java:125)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.checkAsyncRequestForCompletion(AsyncMigrateRouteKeyTest.java:61)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.invokeMigrateApi(AsyncMigrateRouteKeyTest.java:78)
   at 
 org.apache.solr.cloud.MigrateRouteKeyTest.multipleShardMigrateTest(MigrateRouteKeyTest.java:202)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.doTest(AsyncMigrateRouteKeyTest.java:46)
   at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6309) AsyncMigrateRouteKeyTest failure

2014-08-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082178#comment-14082178
 ] 

ASF subversion and git services commented on SOLR-6309:
---

Commit 1615075 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1615075 ]

SOLR-6309: Increase timeouts for AsyncMigrateRouteKeyTest

 AsyncMigrateRouteKeyTest failure
 

 Key: SOLR-6309
 URL: https://issues.apache.org/jira/browse/SOLR-6309
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6309.patch


 Just saw this on my private jenkins:
 {code}
 Error Message
 Task 20140128 not found in completed tasks. expected:<found 20140128 in 
 [completed] tasks> but was:<found 20140128 in [running] tasks>
 Stacktrace
 org.junit.ComparisonFailure: Task 20140128 not found in completed tasks. 
 expected:<found 20140128 in [completed] tasks> but was:<found 20140128 in 
 [running] tasks>
   at 
 __randomizedtesting.SeedInfo.seed([531157FE6D39FBB:84D79B67918CFF87]:0)
   at org.junit.Assert.assertEquals(Assert.java:125)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.checkAsyncRequestForCompletion(AsyncMigrateRouteKeyTest.java:61)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.invokeMigrateApi(AsyncMigrateRouteKeyTest.java:78)
   at 
 org.apache.solr.cloud.MigrateRouteKeyTest.multipleShardMigrateTest(MigrateRouteKeyTest.java:202)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.doTest(AsyncMigrateRouteKeyTest.java:46)
   at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6309) AsyncMigrateRouteKeyTest failure

2014-08-01 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6309.
-

Resolution: Fixed

 AsyncMigrateRouteKeyTest failure
 

 Key: SOLR-6309
 URL: https://issues.apache.org/jira/browse/SOLR-6309
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6309.patch


 Just saw this on my private jenkins:
 {code}
 Error Message
 Task 20140128 not found in completed tasks. expected:<found 20140128 in 
 [completed] tasks> but was:<found 20140128 in [running] tasks>
 Stacktrace
 org.junit.ComparisonFailure: Task 20140128 not found in completed tasks. 
 expected:<found 20140128 in [completed] tasks> but was:<found 20140128 in 
 [running] tasks>
   at 
 __randomizedtesting.SeedInfo.seed([531157FE6D39FBB:84D79B67918CFF87]:0)
   at org.junit.Assert.assertEquals(Assert.java:125)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.checkAsyncRequestForCompletion(AsyncMigrateRouteKeyTest.java:61)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.invokeMigrateApi(AsyncMigrateRouteKeyTest.java:78)
   at 
 org.apache.solr.cloud.MigrateRouteKeyTest.multipleShardMigrateTest(MigrateRouteKeyTest.java:202)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.doTest(AsyncMigrateRouteKeyTest.java:46)
   at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6309) AsyncMigrateRouteKeyTest failure

2014-08-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082179#comment-14082179
 ] 

ASF subversion and git services commented on SOLR-6309:
---

Commit 1615076 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1615076 ]

SOLR-6309: Increase timeouts for AsyncMigrateRouteKeyTest

 AsyncMigrateRouteKeyTest failure
 

 Key: SOLR-6309
 URL: https://issues.apache.org/jira/browse/SOLR-6309
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud, Tests
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-6309.patch


 Just saw this on my private jenkins:
 {code}
 Error Message
 Task 20140128 not found in completed tasks. expected:<found 20140128 in 
 [completed] tasks> but was:<found 20140128 in [running] tasks>
 Stacktrace
 org.junit.ComparisonFailure: Task 20140128 not found in completed tasks. 
 expected:<found 20140128 in [completed] tasks> but was:<found 20140128 in 
 [running] tasks>
   at 
 __randomizedtesting.SeedInfo.seed([531157FE6D39FBB:84D79B67918CFF87]:0)
   at org.junit.Assert.assertEquals(Assert.java:125)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.checkAsyncRequestForCompletion(AsyncMigrateRouteKeyTest.java:61)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.invokeMigrateApi(AsyncMigrateRouteKeyTest.java:78)
   at 
 org.apache.solr.cloud.MigrateRouteKeyTest.multipleShardMigrateTest(MigrateRouteKeyTest.java:202)
   at 
 org.apache.solr.cloud.AsyncMigrateRouteKeyTest.doTest(AsyncMigrateRouteKeyTest.java:46)
   at 
 org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6308) Remove filtered documents from elevated set

2014-08-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082180#comment-14082180
 ] 

Joel Bernstein commented on SOLR-6308:
--

David, it's possible that what you're seeing is noted in this ticket: 
SOLR-6066. So this is an issue with the CollapsingQParserPlugin. 






 Remove filtered documents from elevated set
 ---

 Key: SOLR-6308
 URL: https://issues.apache.org/jira/browse/SOLR-6308
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.9
Reporter: David Boychuck
 Fix For: 4.10

   Original Estimate: 8h
  Remaining Estimate: 8h

 I would like to add a parameter to the Query Elevation Component. Something 
 like showFiltered=false where any results that have been filtered from the 
 result set with the fq parameter will no longer be elevated.
 as an example if I had two documents returned in a query
 {code}
 id=A
 field_1=foo
 id=B
 field_1=bar
 {code}
 I would want the following query to yield the shown results
 {code}
 /solr/elevate?q=*&fq=field_1:bar&elevate=true&elevateIds=A
 id=B
 field_1=bar
 {code}
 id A is removed from the results because it is not contained in the filtered 
 results even though it is elevated. It would be nice if we could pass an 
 optional parameter like showFiltered=false where any results that have been 
 filtered from the result set with the fq parameter will no longer be 
 elevated. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6310) create a TypeQueryParser to query each token on it's matching field type

2014-08-01 Thread Manuel Lenormand (JIRA)
Manuel Lenormand created SOLR-6310:
--

 Summary: create a TypeQueryParser to query each token on it's 
matching field type
 Key: SOLR-6310
 URL: https://issues.apache.org/jira/browse/SOLR-6310
 Project: Solr
  Issue Type: Improvement
  Components: query parsers
Affects Versions: 4.9
Reporter: Manuel Lenormand
Priority: Minor
 Fix For: 5.0, 4.10


Indexed documents frequently contain different types in different fields, e.g. 
emails, telephone numbers, IPs, etc. The fields may have been extracted from the 
content field or originally structured that way.

We should propose a query parser that recognizes the query token's type (e.g. by 
regex) and implicitly reformulates the query to run against the matching field 
only. That would give a good performance boost when the query would otherwise run 
against a catch-all field, and would allow analysis better adapted to each type.
 It would also avoid the IDF drift that occurs on such a catch-all field.

A workaround could be using the type token filter with the matching type 
whitelist and querying all the different field types with edismax's qf param.
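The routing idea described above can be sketched as follows. This is a hypothetical illustration, not part of Solr: the field names (`email_field`, `ip_field`, `phone_field`, `text_all`) and the regex patterns are made up; a real implementation would plug into a query parser and reuse the schema's types.

```java
import java.util.regex.Pattern;

public class TypeRouting {
    // Illustrative patterns for recognizing token types.
    static final Pattern EMAIL = Pattern.compile("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
    static final Pattern IP    = Pattern.compile("\\d{1,3}(\\.\\d{1,3}){3}");
    static final Pattern PHONE = Pattern.compile("\\+?\\d[\\d\\- ]{6,}\\d");

    /** Returns a fielded clause like "email_field:foo@bar.com". */
    static String routeToken(String token) {
        if (EMAIL.matcher(token).matches()) return "email_field:" + token;
        if (IP.matcher(token).matches())    return "ip_field:" + token;
        if (PHONE.matcher(token).matches()) return "phone_field:" + token;
        return "text_all:" + token; // fall back to the general field
    }

    public static void main(String[] args) {
        System.out.println(routeToken("foo@bar.com"));
        System.out.println(routeToken("10.0.0.1"));
        System.out.println(routeToken("hello"));
    }
}
```

Routing each token to one type-specific field is also what keeps the term statistics (and hence IDF) of each type separate, which is the drift the description mentions.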



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5865) create fork of analyzers module without Version

2014-08-01 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082233#comment-14082233
 ] 

Yonik Seeley commented on LUCENE-5865:
--

Please... let's not duplicate all of this stuff.

 create fork of analyzers module without Version
 ---

 Key: LUCENE-5865
 URL: https://issues.apache.org/jira/browse/LUCENE-5865
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir
 Fix For: 4.10

 Attachments: LUCENE-5865.patch


 Since this is obviously too controversial to fix, we can just add 
 *alternatives* that don't have this messy api, e.g. under analyzers-simple.
 These won't have Version. They don't need factories, because they are 
 actually geared at being usable for lucene users.
 One nice thing is, this way the problem can be fixed in 4.10



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5473) Split clusterstate.json per collection and watch states selectively

2014-08-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082253#comment-14082253
 ] 

Mark Miller commented on SOLR-5473:
---

Sorry, another week of craziness on my end. I'll dedicate my Saturday work to 
this and get a new patch up over the weekend. Unless I see something I really 
have a big issue with, I'll mainly concentrate on the documentation I think we 
should have for other devs and we can increment on most anything else after 
committing.

 Split clusterstate.json per collection and watch states selectively 
 

 Key: SOLR-5473
 URL: https://issues.apache.org/jira/browse/SOLR-5473
 Project: Solr
  Issue Type: Sub-task
  Components: SolrCloud
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-5473-74 .patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74.patch, 
 SOLR-5473-74.patch, SOLR-5473-74.patch, SOLR-5473-74_POC.patch, 
 SOLR-5473-configname-fix.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, SOLR-5473.patch, 
 SOLR-5473_undo.patch, ec2-23-20-119-52_solr.log, ec2-50-16-38-73_solr.log


 As defined in the parent issue, store the states of each collection under 
 /collections/collectionname/state.json node



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6275) Improve accuracy of QTime reporting

2014-08-01 Thread Ramkumar Aiyengar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082267#comment-14082267
 ] 

Ramkumar Aiyengar commented on SOLR-6275:
-

[~markrmil...@gmail.com], do those numbers sound reasonable to you?

 Improve accuracy of QTime reporting
 ---

 Key: SOLR-6275
 URL: https://issues.apache.org/jira/browse/SOLR-6275
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ramkumar Aiyengar
Priority: Minor

 Currently, {{QTime}} uses {{currentTimeMillis}} instead of {{nanoTime}} and 
 hence is not suitable for time measurements. Further, it is really started 
 after all the dispatch logic in {{SolrDispatchFilter}} (same with the top 
 level timing reported by {{debug=timing}}) which may or may not be expensive, 
 and hence may not fully represent the time taken by the search. This is to 
 remedy both cases.
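The distinction the description draws can be sketched as below: `System.currentTimeMillis()` reads the wall clock, which can jump when the system clock is adjusted, while `System.nanoTime()` is monotonic and is the intended primitive for measuring elapsed time. The helper name is illustrative, not Solr's actual code.

```java
import java.util.concurrent.TimeUnit;

public class ElapsedTime {
    /** Measures how long work takes, using the monotonic clock. */
    static long timeMillis(Runnable work) {
        long start = System.nanoTime();           // monotonic start mark
        work.run();
        long elapsedNanos = System.nanoTime() - start;
        return TimeUnit.NANOSECONDS.toMillis(elapsedNanos);
    }

    public static void main(String[] args) {
        long qtime = timeMillis(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        });
        System.out.println("elapsed: " + qtime + " ms");
    }
}
```

Note that `nanoTime` values are only meaningful as differences; the absolute value has no relation to wall-clock time.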



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6275) Improve accuracy of QTime reporting

2014-08-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082268#comment-14082268
 ] 

Mark Miller commented on SOLR-6275:
---

Yeah, I don't see a real issue with it.

 Improve accuracy of QTime reporting
 ---

 Key: SOLR-6275
 URL: https://issues.apache.org/jira/browse/SOLR-6275
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ramkumar Aiyengar
Priority: Minor

 Currently, {{QTime}} uses {{currentTimeMillis}} instead of {{nanoTime}} and 
 hence is not suitable for time measurements. Further, it is really started 
 after all the dispatch logic in {{SolrDispatchFilter}} (same with the top 
 level timing reported by {{debug=timing}}) which may or may not be expensive, 
 and hence may not fully represent the time taken by the search. This is to 
 remedy both cases.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5664) /browse: Show all highlighting fragments

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-5664:
---

Fix Version/s: (was: 4.9)
   4.10

 /browse: Show all highlighting fragments
 

 Key: SOLR-5664
 URL: https://issues.apache.org/jira/browse/SOLR-5664
 Project: Solr
  Issue Type: Bug
  Components: contrib - Velocity
Reporter: Jan Høydahl
Assignee: Erik Hatcher
 Fix For: 5.0, 4.10


 Currently if there are more highlighting fragments for the features field 
 in example, only the first one is redered in the /browse GUI



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-5664) /browse: Show all highlighting fragments

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-5664:
--

Assignee: Erik Hatcher

 /browse: Show all highlighting fragments
 

 Key: SOLR-5664
 URL: https://issues.apache.org/jira/browse/SOLR-5664
 Project: Solr
  Issue Type: Bug
  Components: contrib - Velocity
Reporter: Jan Høydahl
Assignee: Erik Hatcher
 Fix For: 5.0, 4.10


 Currently, if there are multiple highlighting fragments for the features field 
 in the example, only the first one is rendered in the /browse GUI.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4702) Velocity templates not rendering spellcheck suggestions correctly

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-4702:
---

Fix Version/s: 4.10
   5.0

 Velocity templates not rendering spellcheck suggestions correctly
 -

 Key: SOLR-4702
 URL: https://issues.apache.org/jira/browse/SOLR-4702
 Project: Solr
  Issue Type: Bug
  Components: contrib - Velocity
Affects Versions: 4.2
Reporter: Mark Bennett
Assignee: Erik Hatcher
 Fix For: 5.0, 4.10

 Attachments: SOLR-4702.patch, SOLR-4702.patch, SOLR-4702.patch


 The spellcheck links, AKA "Did you mean", aren't rendered correctly.
 Instead of just having the corrected words, they have some .toString 
 gibberish because the object being serialized is too high up in the tree.
 This breaks both the link text displayed to the user, and the href used for 
 the anchor tag.
 Example:
 Search for electronicss OR monitor and you get:
 Did you mean {collationQuery=electronics OR 
 monitor,hits=14,misspellingsAndCorrections={electronicss=electronics,monitor=monitor}}?
 But you should just see:
 Did you mean electronics OR monitor?   (with hyperlinked electronics OR 
 monitor)
 The actual query submitted by those links is similarly broken.  Possibly the 
 templates were developed before collation was added and/or enabled by default.
 To see this bug at all with the example configs and docs you'll need to have 
 applied SOLR-4680 or SOLR-4681 against 4.2 or trunk.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-4512) /browse GUI: Extra URL params should be sticky

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-4512:
--

Assignee: Erik Hatcher

 /browse GUI: Extra URL params should be sticky
 --

 Key: SOLR-4512
 URL: https://issues.apache.org/jira/browse/SOLR-4512
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Velocity
Reporter: Jan Høydahl
Assignee: Erik Hatcher

 Sometimes you want to experiment with extra query params in Velocity 
 /browse. But if you modify the URL, they will be forgotten once you click 
 anything in the GUI.
 We need a way to make them sticky: either generate all the links based on the 
 current actual URL, or add a checkbox which reveals a new input field where 
 you can write all the extra params you want appended.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: SolrCloud on HDFS empty tlog hence doesn't replay after Solr process crash and restart

2014-08-01 Thread Tom Chen
I wonder if there's any update on this. Should we create a JIRA to track
this?

Thanks,
Tom


On Mon, Jul 21, 2014 at 12:18 PM, Mark Miller markrmil...@gmail.com wrote:

 It’s on my list to investigate.

 --
 Mark Miller
 about.me/markrmiller

 On July 21, 2014 at 10:26:09 AM, Tom Chen (tomchen1...@gmail.com) wrote:
  Any thought about this issue: Solr on HDFS generates empty tlogs when adding
  documents without commit.
 
  Thanks,
  Tom
 
 
  On Fri, Jul 18, 2014 at 12:21 PM, Tom Chen wrote:
 
   Hi,
  
   This seems a bug for Solr running on HDFS.
  
   Reproduce steps:
   1) Setup Solr to run on HDFS like this:
  
   java -Dsolr.directoryFactory=HdfsDirectoryFactory
   -Dsolr.lock.type=hdfs
   -Dsolr.hdfs.home=hdfs://host:port/path
  
   For the purpose of this testing, turn off the default auto commit in
   solrconfig.xml, i.e. comment out autoCommit like this:
  
  
   2) Add a document without commit:
    curl "http://localhost:8983/solr/collection1/update?commit=false" -H
    "Content-type:text/xml; charset=utf-8" --data-binary @solr.xml
  
    3) Solr generates empty tlog files (0 file size; the last two in the
  listing end with 5 and 6):
   [hadoop@hdtest042 exampledocs]$ hadoop fs -ls
   /path/collection1/core_node1/data/tlog
   Found 5 items
   -rw-r--r-- 1 hadoop hadoop 667 2014-07-18 08:47
   /path/collection1/core_node1/data/tlog/tlog.001
   -rw-r--r-- 1 hadoop hadoop 67 2014-07-18 08:47
   /path/collection1/core_node1/data/tlog/tlog.003
   -rw-r--r-- 1 hadoop hadoop 667 2014-07-18 08:47
   /path/collection1/core_node1/data/tlog/tlog.004
   -rw-r--r-- 1 hadoop hadoop 0 2014-07-18 09:02
   /path/collection1/core_node1/data/tlog/tlog.005
   -rw-r--r-- 1 hadoop hadoop 0 2014-07-18 09:02
   /path/collection1/core_node1/data/tlog/tlog.006
  
   4) Simulate Solr crash by killing the process with -9 option.
  
    5) Restart the Solr process. The observation is that uncommitted documents
  are
    not replayed and files in the tlog directory are cleaned up. Hence the
    uncommitted document(s) are lost.
  
    Am I missing anything, or is this a bug?
  
   BTW, additional observations:
    a) If in step 4) Solr is stopped gracefully (i.e. without the -9 option),
    a non-empty tlog file is generated and, after restarting Solr, the
    uncommitted document is replayed as expected.
  
    b) If Solr doesn't run on HDFS (i.e. runs on the local file system), this
  issue
    is not observed either.
  
   Thanks,
   Tom
  
 


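A quick way to spot the zero-length tlog files described in step 3 is to scan the `hadoop fs -ls` output. This is a hypothetical Python helper, not part of Solr; the field positions follow the listing format shown above, and the file names used when testing it are illustrative:

```python
def empty_tlogs(ls_output):
    """Return paths of zero-length tlog files from `hadoop fs -ls` output
    (hypothetical helper; field positions follow the listing shown above)."""
    empties = []
    for line in ls_output.splitlines():
        fields = line.split()
        # -ls rows: perms, replication, owner, group, size, date, time, path
        if len(fields) == 8 and "tlog." in fields[7] and fields[4] == "0":
            empties.append(fields[7])
    return empties
```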




[jira] [Assigned] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-3711:
--

Assignee: Erik Hatcher

 Velocity: Break or truncate long strings in facet output
 

 Key: SOLR-3711
 URL: https://issues.apache.org/jira/browse/SOLR-3711
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Reporter: Jan Høydahl
Assignee: Erik Hatcher
  Labels: /browse
 Fix For: 5.0


 In the Solritas /browse GUI, if facets contain very long strings (as 
 content-type values tend to do), the too-long text currently runs over the main 
 column and it is not pretty.
 Perhaps inserting a soft hyphen &shy; 
 (http://en.wikipedia.org/wiki/Soft_hyphen) at position N in very long terms 
 is a solution?
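The proposal can be sketched as follows. This is a hypothetical Python helper (the real fix would live in a Velocity template); the `every` parameter stands in for the position N mentioned above:

```python
SHY = "\u00ad"  # Unicode soft hyphen (&shy; in HTML)

def soft_hyphenate(term, every=20):
    """Insert a soft hyphen every `every` characters so the browser may
    break very long facet values (hypothetical sketch of the proposal)."""
    if len(term) <= every:
        return term
    return SHY.join(term[i:i + every] for i in range(0, len(term), every))
```

Soft hyphens are invisible unless the browser actually breaks the line there, so short facet values render unchanged.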






[jira] [Assigned] (SOLR-2168) Velocity facet output for facet missing

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-2168:
--

Assignee: Erik Hatcher

 Velocity facet output for facet missing
 ---

 Key: SOLR-2168
 URL: https://issues.apache.org/jira/browse/SOLR-2168
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Affects Versions: 3.1
Reporter: Peter Wolanin
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-2168.patch


 If I add facet.missing to the facet params for a field, the Velocity output 
 has this in the facet list:
 $facet.name (9220)






[jira] [Updated] (SOLR-2168) Velocity facet output for facet missing

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-2168:
---

Fix Version/s: 4.10
   5.0

 Velocity facet output for facet missing
 ---

 Key: SOLR-2168
 URL: https://issues.apache.org/jira/browse/SOLR-2168
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Affects Versions: 3.1
Reporter: Peter Wolanin
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, 4.10

 Attachments: SOLR-2168.patch


 If I add facet.missing to the facet params for a field, the Velocity output 
 has this in the facet list:
 $facet.name (9220)






[jira] [Assigned] (SOLR-3067) Missing Velocity Template for /browse request handler.

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-3067:
--

Assignee: Erik Hatcher

 Missing Velocity Template for /browse request handler.
 --

 Key: SOLR-3067
 URL: https://issues.apache.org/jira/browse/SOLR-3067
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 3.5
Reporter: Tom Hill
Assignee: Erik Hatcher
Priority: Trivial
 Fix For: 5.0, 4.10


 If you add group=on&group.field=inStock to the URL in the /browse 
 request handler, it throws a 500 error due to a missing hitGrouped.vm file. 
 This works correctly in trunk. Copying hitGrouped.vm from 4.0 to 3.5 prevents 
 the error, although some of the other grouping support still isn't present.
 One could just remove the #parse("hitGrouped.vm") from browse.vm and avoid the 
 error, but it's probably about as easy to backport it.






[jira] [Updated] (SOLR-3067) Missing Velocity Template for /browse request handler.

2014-08-01 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-3067:
---

Fix Version/s: 4.10
   5.0

 Missing Velocity Template for /browse request handler.
 --

 Key: SOLR-3067
 URL: https://issues.apache.org/jira/browse/SOLR-3067
 Project: Solr
  Issue Type: Bug
  Components: web gui
Affects Versions: 3.5
Reporter: Tom Hill
Assignee: Erik Hatcher
Priority: Trivial
 Fix For: 5.0, 4.10


 If you add group=on&group.field=inStock to the URL in the /browse 
 request handler, it throws a 500 error due to a missing hitGrouped.vm file. 
 This works correctly in trunk. Copying hitGrouped.vm from 4.0 to 3.5 prevents 
 the error, although some of the other grouping support still isn't present.
 One could just remove the #parse("hitGrouped.vm") from browse.vm and avoid the 
 error, but it's probably about as easy to backport it.






[jira] [Commented] (SOLR-5656) Add autoAddReplicas feature for shared file systems.

2014-08-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082353#comment-14082353
 ] 

Mark Miller commented on SOLR-5656:
---

bq. If being explicit isn't required, is csr-2 legal?

Yeah, it's legal.

 Add autoAddReplicas feature for shared file systems.
 

 Key: SOLR-5656
 URL: https://issues.apache.org/jira/browse/SOLR-5656
 Project: Solr
  Issue Type: New Feature
Reporter: Mark Miller
Assignee: Mark Miller
 Attachments: SOLR-5656.patch, SOLR-5656.patch, SOLR-5656.patch, 
 SOLR-5656.patch


 When using HDFS, the Overseer should have the ability to reassign the cores 
 from failed nodes to running nodes.
 Given that the index and transaction logs are in HDFS, it's simple for 
 surviving hardware to take over serving cores for failed hardware.
 There are some tricky issues around having the Overseer handle this for you, 
 but it seems a simple first pass is not too difficult.
 This will add another alternative to replicating with both HDFS and Solr.
 It shouldn't be specific to HDFS, and would be an option for any shared file 
 system Solr supports.






[jira] [Commented] (SOLR-5378) Suggester Version 2

2014-08-01 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082378#comment-14082378
 ] 

Shalin Shekhar Mangar commented on SOLR-5378:
-

Mark, the shards param is definitely optional but shards.qt will still be 
required.

 Suggester Version 2
 ---

 Key: SOLR-5378
 URL: https://issues.apache.org/jira/browse/SOLR-5378
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Areek Zillur
Assignee: Shalin Shekhar Mangar
 Fix For: 4.7, 5.0

 Attachments: SOLR-5378-maven-fix.patch, SOLR-5378.patch, 
 SOLR-5378.patch, SOLR-5378.patch, SOLR-5378.patch, SOLR-5378.patch, 
 SOLR-5378.patch, SOLR-5378.patch, SOLR-5378.patch, SOLR-5378.patch, 
 SOLR-5378.patch, SOLR-5378.patch, SOLR-5378.patch, SOLR-5378.patch


 The idea is to add a new Suggester Component that will eventually replace the 
 Suggester support through the SpellCheck Component.
 This will enable Solr to fully utilize the Lucene suggester module (along 
 with supporting most of the existing features) in the following ways:
- Dictionary pluggability (give users the option to choose the dictionary 
 implementation to use for their suggesters to consume)
- Map the suggester options/ suggester result format (e.g. support for 
 payload)
- The new Component will also allow us to have beefier Lookup support 
 instead of resorting to collation and such. (Move computation from query time 
 to index time) with more freedom
 In addition, this suggester version should also have distributed support, 
 which was awkward at best in the previous 
 implementation due to SpellCheck requirements.
 Config (index time) options:
   - name - name of suggester
   - sourceLocation - external file location (for file-based suggesters)
   - lookupImpl - type of lookup to use [default JaspellLookupFactory]
   - dictionaryImpl - type of dictionary to use (lookup input) [default
 (sourceLocation == null ? HighFrequencyDictionaryFactory : 
 FileDictionaryFactory)]
   - storeDir - location to store the in-memory data structure on disk
   - buildOnCommit - command to build suggester for every commit
   - buildOnOptimize - command to build suggester for every optimize
 Query time options:
   - suggest.dictionary - name of suggester to use
   - suggest.count - number of suggestions to return
   - suggest.q - query to use for lookup
   - suggest.build - command to build the suggester
   - suggest.reload - command to reload the suggester
 Example query:
 {code}
 http://localhost:8983/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec
 {code}
 Distributed query:
 {code}
 http://localhost:7574/solr/suggest?suggest.dictionary=mySuggester&suggest=true&suggest.build=true&suggest.q=elec&shards=localhost:8983/solr,localhost:7574/solr&shards.qt=/suggest
 {code}
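Assembling such suggest URLs programmatically avoids typos in the long parameter strings; a minimal sketch, assuming a hypothetical `suggest_url` helper (not part of SolrJ):

```python
from urllib.parse import urlencode

def suggest_url(host, dictionary, q, build=False, shards=None, shards_qt=None):
    """Assemble a /suggest request URL like the examples above
    (hypothetical helper, not part of SolrJ)."""
    params = [("suggest.dictionary", dictionary), ("suggest", "true"), ("suggest.q", q)]
    if build:
        params.append(("suggest.build", "true"))
    if shards:  # distributed request: list of shard addresses
        params += [("shards", ",".join(shards)), ("shards.qt", shards_qt or "/suggest")]
    return "http://%s/solr/suggest?%s" % (host, urlencode(params, safe="/,:"))

url = suggest_url("localhost:8983", "mySuggester", "elec", build=True)
```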
 Example Response:
 {code}
  <response>
    <lst name="responseHeader">
      <int name="status">0</int>
      <int name="QTime">28</int>
    </lst>
    <str name="command">build</str>
    <result name="response" numFound="0" start="0" maxScore="0.0"/>
    <lst name="suggest">
      <lst name="suggestions">
        <lst name="e">
          <int name="numFound">2</int>
          <lst name="suggestion">
            <str name="term">electronics and computer1</str>
            <long name="weight">2199</long>
            <str name="payload"/>
          </lst>
          <lst name="suggestion">
            <str name="term">electronics</str>
            <long name="weight">649</long>
            <str name="payload"/>
          </lst>
        </lst>
      </lst>
    </lst>
  </response>
 {code}
 Example config file:
- Using DocumentDictionary and FuzzySuggester 
-- Suggestion on product_name sorted by popularity with the additional 
 product_id in the payload
 {code}  
   <searchComponent class="solr.SuggestComponent" name="suggest">
     <lst name="suggester">
       <str name="name">suggest_fuzzy_doc_dict</str>
       <str name="lookupImpl">FuzzyLookupFactory</str>
       <str name="dictionaryImpl">DocumentDictionaryFactory</str>
       <str name="field">product_name</str>
       <str name="weightField">popularity</str>
       <str name="payloadField">product_id</str>
       <str name="storeDir">suggest_fuzzy_doc_dict_payload</str>
       <str name="suggestAnalyzerFieldType">text</str>
     </lst>
   </searchComponent>
 {code}
   - Using DocumentExpressionDictionary and FuzzySuggester
   -- Suggestion on product_name sorted by the expression ((price * 2) + 
 ln(popularity)) (where both price and popularity are fields in the document)
 {code}
 <searchComponent class="solr.SuggestComponent" name="suggest">
   <lst name="suggester">
     <str name="name">suggest_fuzzy_doc_expr_dict</str>
     <str name="dictionaryImpl">DocumentExpressionDictionaryFactory</str>
     <str name="lookupImpl">FuzzyLookupFactory</str>
     <str name="field">product_name</str>
     <str name="weightExpression">((price * 

[jira] [Commented] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Brett Hoerner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082380#comment-14082380
 ] 

Brett Hoerner commented on SOLR-6306:
-

The index isn't very small, 3.1GB here: 
https://s3.amazonaws.com/massrel-pub/index.tar

checkindex output: https://s3.amazonaws.com/massrel-pub/checkindex.txt

 Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)
 -

 Key: SOLR-6306
 URL: https://issues.apache.org/jira/browse/SOLR-6306
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Brett Hoerner

 I have a SolrCloud cluster that has been running 4.9. I tried a 4.10 build as 
 a test, and our indexing slowed to a crawl. I noticed the number of segments 
 (typically under 25) was up to 75 and climbing. In the logs it seems like 
 merges were failing with the following.
 Happy to provide any other info as needed.
 {code}
 15:06:24.624 [qtp1728790703-1634] ERROR o.a.solr.servlet.SolrDispatchFilter - 
 null:java.io.IOException: background merge hit exception: 
 _9n6s(4.9):C14802716/827586:delGen=97 _9nbh(4.9):C2903594/263527:delGen=100 
 _9no8(4.9):C2190621/20968:delGen=58 _9nak(4.9):C712244/78919:delGen=100 
 _9nfr(4.9):C686466/84576:delGen=97 
 _9ngy(4.9):C679031/90147:delGen=96 _9ncx(4.9):C641773/81866:delGen=99 
 _9nht(4.9):C415750/68337:delGen=94 _9mvj(4.9):C338961/39283:delGen=110 
 _9nje(4.9):C215123/41594:delGen=87 _9nmn(4.9):C156084/40673:delGen=69 
 _9nsk(4.9):C60958/7357:delGen=21 _9nka(4.9):C69625/22375:delGen=83 
 _9nrl(4.9):C27522/4326:delGen=31 _9nqr(4.9):C27216/7540:delGen=39 
 _9nqm(4.9):C24252/5597:delGen=40 
 _9nto(4.9):C10324/1882:delGen=10 _9ntx(4.9):C9581/1218:delGen=8 
 _9nts(4.9):C9731/1619:delGen=9 _9nv1(4.10):C3425 
 _9ntz(4.9):C1437/919:delGen=8 _9nu7(4.10):C1130/697:delGen=5 
 _9nuw(4.10):C611/218:delGen=2 _9nun(4.10):C625/308:delGen=3 
 _9nug(4.10):C828/489:delGen=4 into _9nv2 [maxNumSegments=1]
 at 
 org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1865)
 at 
 org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1801)
 at 
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:563)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1648)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1625)
 at 
 org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1963)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
 at 
 org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
 at org.eclipse.jetty.server.Server.handle(Server.java:368)
 at 
 org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
 at 
 

[jira] [Commented] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082394#comment-14082394
 ] 

Robert Muir commented on SOLR-6306:
---

Brett, thank you very much. 

It seems the segments are already corrupt (and some have source=flush, so they 
came directly from IndexWriter), so I don't think it's a merging bug; something 
way more wrong has happened. Moreover, the checksums pass, so it's not like your 
disk went bad or something like that.

One thing that concerns me is that the Java version is 1.8.0.

But I will download your index now, play with it, and try to figure it out.


Re: Review Request 23371: SOLR-5656: Add autoAddReplicas feature for shared file systems.

2014-08-01 Thread Mark Miller

---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/23371/
---

(Updated Aug. 1, 2014, 4:07 p.m.)


Review request for lucene.


Changes
---

New patch based on feedback.


Bugs: SOLR-5656
https://issues.apache.org/jira/browse/SOLR-5656


Repository: lucene


Description
---

First svn patch for SOLR-5656: Add autoAddReplicas feature for shared file 
systems.


Diffs (updated)
-

  trunk/solr/cloud-dev/control.sh 1614918 
  trunk/solr/cloud-dev/functions.sh 1614918 
  trunk/solr/cloud-dev/solrcloud-start-existing.sh 1614918 
  trunk/solr/cloud-dev/solrcloud-start.sh 1614918 
  trunk/solr/cloud-dev/stop.sh 1614918 
  
trunk/solr/contrib/dataimporthandler/src/test/org/apache/solr/handler/dataimport/TestSolrEntityProcessorEndToEnd.java
 1614918 
  trunk/solr/core/src/java/org/apache/solr/cloud/Assign.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/cloud/CloudUtil.java PRE-CREATION 
  trunk/solr/core/src/java/org/apache/solr/cloud/ElectionContext.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/cloud/Overseer.java 1614918 
  
trunk/solr/core/src/java/org/apache/solr/cloud/OverseerAutoReplicaFailoverThread.java
 PRE-CREATION 
  
trunk/solr/core/src/java/org/apache/solr/cloud/OverseerCollectionProcessor.java 
1614918 
  trunk/solr/core/src/java/org/apache/solr/cloud/ZkController.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolr.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXml.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/core/ConfigSolrXmlOld.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/core/CoreContainer.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/core/DirectoryFactory.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/core/HdfsDirectoryFactory.java 
1614918 
  
trunk/solr/core/src/java/org/apache/solr/handler/admin/CollectionsHandler.java 
1614918 
  trunk/solr/core/src/java/org/apache/solr/handler/admin/CoreAdminHandler.java 
1614918 
  trunk/solr/core/src/java/org/apache/solr/request/LocalSolrQueryRequest.java 
1614918 
  trunk/solr/core/src/java/org/apache/solr/update/HdfsUpdateLog.java 1614918 
  trunk/solr/core/src/java/org/apache/solr/update/UpdateShardHandler.java 
1614918 
  trunk/solr/core/src/test-files/log4j.properties 1614918 
  trunk/solr/core/src/test-files/solr/solr-no-core.xml 1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/BasicDistributedZkTest.java 
1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/ChaosMonkeyShardSplitTest.java 
1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/ClusterStateUpdateTest.java 
1614918 
  
trunk/solr/core/src/test/org/apache/solr/cloud/CollectionsAPIDistributedZkTest.java
 1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/CustomCollectionTest.java 
1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/DeleteReplicaTest.java 1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/MigrateRouteKeyTest.java 
1614918 
  
trunk/solr/core/src/test/org/apache/solr/cloud/OverseerCollectionProcessorTest.java
 1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/OverseerRolesTest.java 1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/OverseerTest.java 1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/ShardRoutingCustomTest.java 
1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/ShardSplitTest.java 1614918 
  
trunk/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverTest.java
 PRE-CREATION 
  
trunk/solr/core/src/test/org/apache/solr/cloud/SharedFSAutoReplicaFailoverUtilsTest.java
 PRE-CREATION 
  trunk/solr/core/src/test/org/apache/solr/cloud/ZkControllerTest.java 1614918 
  trunk/solr/core/src/test/org/apache/solr/cloud/hdfs/HdfsTestUtil.java 1614918 
  trunk/solr/core/src/test/org/apache/solr/handler/TestReplicationHandler.java 
1614918 
  
trunk/solr/core/src/test/org/apache/solr/handler/TestReplicationHandlerBackup.java
 1614918 
  trunk/solr/core/src/test/org/apache/solr/search/TestRecoveryHdfs.java 1614918 
  trunk/solr/core/src/test/org/apache/solr/util/MockConfigSolr.java 
PRE-CREATION 
  trunk/solr/example/solr/solr.xml 1614918 
  
trunk/solr/solrj/src/java/org/apache/solr/client/solrj/request/CollectionAdminRequest.java
 1614918 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ClosableThread.java 
1614918 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterState.java 
1614918 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ClusterStateUtil.java 
PRE-CREATION 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/DocCollection.java 
1614918 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/SolrZkClient.java 
1614918 
  trunk/solr/solrj/src/java/org/apache/solr/common/cloud/ZkStateReader.java 
1614918 
  

[jira] [Commented] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Brett Hoerner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14082414#comment-14082414
 ] 

Brett Hoerner commented on SOLR-6306:
-

I just want to note that this happened on two different collections (same 
SolrCloud cluster) on different machines. In both cases I had existing shards 
from 4.9 and I tried to index into them after running 4.10. Our data is sharded 
by time and rolls forward; new shards created after 4.10 that don't have 
any pre-4.10 data are doing fine.

I believe I can repro this by taking any of my old 4.9 shards and indexing a lot 
of data into them under 4.10... let me know if you need anything from me.


[jira] [Commented] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082425#comment-14082425
 ] 

Robert Muir commented on SOLR-6306:
---

Brett, all the problematic segments have this:

lucene.version=4.9-SNAPSHOT Unversioned directory - brett - 2014-06-16 13:17:20

It looks like these were created with an unreleased version of 4.9? The index 
format is not finalized until the final release, so that would explain why 4.10 
cannot read it: we can only support backwards compatibility for release 
versions.

If you can tell me what SVN revision you used to create that snapshot, I can 
tell you with 100% confidence, but this looks like the issue.

 Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)
 -

 Key: SOLR-6306
 URL: https://issues.apache.org/jira/browse/SOLR-6306
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.10
Reporter: Brett Hoerner

 I have a SolrCloud cluster that has been running 4.9, I tried a 4.10 build as 
 a test and our indexing slowed to a crawl. I noticed the number of segments 
 (typically under 25) was up to 75 and climbing. In the logs it seems like 
 merges were failing with the following.
 Happy to provide any other info as needed.
 {code}
 15:06:24.624 [qtp1728790703-1634] ERROR o.a.solr.servlet.SolrDispatchFilter - 
 null:java.io.IOException: background merge hit exception: 
 _9n6s(4.9):C14802716/827586:delGen=97 _9nbh(4.9):C2903594/263527:delGen=100 
 _9no8(4.9):C2190621/20968:delGen=58 _9nak(4.9):C712244/78919:delGen=100 
 _9nfr(4.9):C686466/84576:delGen=97 
 _9ngy(4.9):C679031/90147:delGen=96 _9ncx(4.9):C641773/81866:delGen=99 
 _9nht(4.9):C415750/68337:delGen=94 _9mvj(4.9):C338961/39283:delGen=110 
 _9nje(4.9):C215123/41594:delGen=87 _9nmn(4.9):C156084/40673:delGen=69 
 _9nsk(4.9):C60958/7357:delGen=21 _9nka(4.9):C69625/22375:delGen=83 
 _9nrl(4.9):C27522/4326:delGen=31 _9nqr(4.9):C27216/7540:delGen=39 _9nqm(4.9):C24252/5597:delGen=40 
 _9nto(4.9):C10324/1882:delGen=10 _9ntx(4.9):C9581/1218:delGen=8 
 _9nts(4.9):C9731/1619:delGen=9 _9nv1(4.10):C3425 
 _9ntz(4.9):C1437/919:delGen=8 _9nu7(4.10):C1130/697:delGen=5 
 _9nuw(4.10):C611/218:delGen=2 _9nun(4.10):C625/308:delGen=3 
 _9nug(4.10):C828/489:delGen=4 into _9nv2 [maxNumSegments=1]
 at 
 org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1865)
 at 
 org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1801)
 at 
 org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:563)
 at 
 org.apache.solr.update.processor.RunUpdateProcessor.processCommit(RunUpdateProcessorFactory.java:95)
 at 
 org.apache.solr.update.processor.UpdateRequestProcessor.processCommit(UpdateRequestProcessor.java:64)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalCommit(DistributedUpdateProcessor.java:1648)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor.processCommit(DistributedUpdateProcessor.java:1625)
 at 
 org.apache.solr.handler.RequestHandlerUtils.handleCommit(RequestHandlerUtils.java:69)
 at 
 org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
 at 
 org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1963)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
 at 
 org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
 at 
 org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
 at 
 org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
 at 
 org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
 at 
 org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
 at 
 org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
 at 
 org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
 at 
 org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
 at 
 

[jira] [Commented] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Brett Hoerner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082447#comment-14082447
 ] 

Brett Hoerner commented on SOLR-6306:
-

Robert, I was afraid of that, but it's reasonable. :)

I'm not sure of the exact svn rev, but it was built from branch_4x as of that day 
(6-16), which to me means something in the following range must have changed things 
(6-16 up to the 4.9 release, from git):

{code}
* 36c54b1 - (tag: lucene_solr_4_9_0) tag 4.9 Robert Muir (5 weeks ago)
* bfcb37f - SOLR-6182: correctly cast managedData as a List<Object> when 
loading stored RestManager data; solution verified with manual testing only as 
the unit tests use in-memory storage so will need to re-work the backing store 
to test this behavior in the unit test; backport to 4.9 branch Timothy Potter 
(6 weeks ago)
* c733b8e - LUCENE-5767: remove bogus cast (in this case can exceed 
Integer.MAX_VALUE, and the underlying delta reader takes long anyway) Robert 
Muir (6 weeks ago)
* f7c3fd8 - fix off-by-one in checkBufferSize, it must be >= 8 Robert Muir (6 
weeks ago)
* fb7c50f - svn:eol-style Robert Muir (6 weeks ago)
* 22fb394 - LUCENE-5777: fix double escaping of dash in hunspell conditions 
Robert Muir (6 weeks ago)
* f19581a - LUCENE-5773: Fix ram usage estimation on PositiveIntOutputs. Adrien 
Grand (6 weeks ago)
* c88eb15 - SOLR-6161: Walk the entire cause chain looking for an Error shalin 
Shekhar Mangar (6 weeks ago)
* 1143ff5 - LUCENE-5773: Test SegmentReader.ramBytesUsed. Adrien Grand (6 weeks 
ago)
* 7d625d2 - SOLR-6128: Removed deprecated analysis factories and fieldTypes 
from the example schema.xml (merge r1603644 via r1603649) Chris M. Hostetter (6 
weeks ago)
* fc53ee8 - SOLR-6064: Return DebugComponent track output as JSON object Alan 
Woodward (6 weeks ago)
* 71fae50 - SOLR-6125: Allow SolrIndexWriter to close without waiting for 
merges Alan Woodward (6 weeks ago)
* f2b8c78 - LUCENE-5775: Deprecate JaspellLookup; fix its ramBytesUsed to not 
StackOverflow Michael McCandless (6 weeks ago)
* 5894d26 - LUCENE-5772: implement getSortedNumericDocValues in 
SortingAtomicReader Shai Erera (6 weeks ago)
* 2d0042b - SOLR-6160: bugfix when facet query or range with group facets and 
distributed David Wayne Smiley (6 weeks ago)
* dfedf04 - SOLR-6164: Copy Fields Schema additions are not distributed to 
other nodes (merged trunk r1603300 and r1603301) Steven Rowe (6 weeks ago)
* 2d811ac - SOLR-6175: Merged test fixes from branch_4x shalin Shekhar Mangar 
(6 weeks ago)
* 8c6fc93 - LUCENE-5761: upgrade note for solr (merge r1603227) Chris M. 
Hostetter (6 weeks ago)
* 3bee1c3 - branch for 4.9 Robert Muir (6 weeks ago)
* 8d9a5f5 - SOLR-6129: DateFormatTransformer doesn't resolve dateTimeFormat 
shalin Shekhar Mangar (6 weeks ago)
* aff7dc9 - SOLR-6175: DebugComponent throws NPE on shard exceptions when using 
shards.tolerant shalin Shekhar Mangar (6 weeks ago)
* 973ed13 - LUCENE-5769: SingletonSortedSetDocValues now supports random access 
ordinals Robert Muir (7 weeks ago)
* 2fa15c3 - Remove javadoc @see tag. I can't manage to make it work with 
precommit. Adrien Grand (7 weeks ago)
* 95c697a - LUCENE-5768: hunspell condition checks with character classes were 
buggy Robert Muir (7 weeks ago)
* 3acb593 - LUCENE-5767: OrdinalMap optimizations. Adrien Grand (7 weeks ago)
* 059a7b5 - SOLR-6015: Backport fixes from trunk to branch_4x. Timothy Potter 
(7 weeks ago)
* de203d5 - LUCENE-5765: Add tests to OrdinalMap.ramBytesUsed. Adrien Grand (7 
weeks ago)
* 5fc0871 - LUCENE-5764: Add tests to DocIdSet.ramBytesUsed. Adrien Grand (7 
weeks ago)
* 9e0e17e - LUCENE-5759: Add PackedInts.unsignedBitsRequired. Adrien Grand (7 
weeks ago)
* 51924f0 - LUCENE-5761: Remove DiskDocValuesFormat Robert Muir (7 weeks ago)
* 171ae5a - SOLR-6151: Intermittent TestReplicationHandlerBackup failures. 
Dawid Weiss (7 weeks ago)
* 9ad403f - LUCENE-5762: Disable old codecs as much as possible Robert Muir (7 
weeks ago)
{code}


[jira] [Commented] (SOLR-6308) Remove filtered documents from elevated set

2014-08-01 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082455#comment-14082455
 ] 

David Boychuck commented on SOLR-6308:
--

Ahh Thanks Joel. I'll close this as a duplicate.

 Remove filtered documents from elevated set
 ---

 Key: SOLR-6308
 URL: https://issues.apache.org/jira/browse/SOLR-6308
 Project: Solr
  Issue Type: Improvement
Affects Versions: 4.9
Reporter: David Boychuck
 Fix For: 4.10

   Original Estimate: 8h
  Remaining Estimate: 8h

 I would like to add a parameter to the Query Elevation Component. Something 
 like showFiltered=false where any results that have been filtered from the 
 result set with the fq parameter will no longer be elevated.
 as an example if I had two documents returned in a query
 {code}
 id=A
 field_1=foo
 id=B
 field_1=bar
 {code}
 I would want the following query to yield the shown results
 {code}
 /solr/elevate?q=*&fq=field_1:bar&elevate=true&elevateIds=A
 id=B
 field_1=bar
 {code}
 id A is removed from the results because it is not contained in the filtered 
 results even though it is elevated. It would be nice if we could pass an 
 optional parameter like showFiltered=false where any results that have been 
 filtered from the result set with the fq parameter will no longer be 
 elevated. 
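The proposed semantics can be sketched as follows. This is an illustrative Python sketch, not Solr's actual QueryElevationComponent code; `apply_elevation` and its parameters are hypothetical names invented for the example:

```python
# Hypothetical helper illustrating the proposed showFiltered=false semantics;
# not Solr's QueryElevationComponent implementation.
def apply_elevation(filtered_ids, elevate_ids, show_filtered=True):
    if show_filtered:
        # current behavior: elevated ids appear even if fq removed them
        kept = list(elevate_ids)
    else:
        # proposed behavior: only elevate ids that survived the fq filter
        allowed = set(filtered_ids)
        kept = [doc_id for doc_id in elevate_ids if doc_id in allowed]
    seen = set(kept)
    # elevated docs first, then the remaining filtered results in order
    return kept + [doc_id for doc_id in filtered_ids if doc_id not in seen]

# fq=field_1:bar matches only doc B; elevateIds=A
print(apply_elevation(["B"], ["A"], show_filtered=False))  # -> ['B']
print(apply_elevation(["B"], ["A"], show_filtered=True))   # -> ['A', 'B']
```

With `show_filtered=False`, doc A is dropped because the fq filter removed it; with the current behavior it is force-inserted at the top.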



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (SOLR-6308) Remove filtered documents from elevated set

2014-08-01 Thread David Boychuck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Boychuck closed SOLR-6308.


Resolution: Duplicate






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_11) - Build # 10940 - Failure!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/10940/
Java: 32bit/jdk1.8.0_11 -server -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.RecoveryZkTest.testDistribSearch

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
at 
__randomizedtesting.SeedInfo.seed([EC2D76E30C7C0AC3:6DCBF8FB7B236AFF]:0)
at sun.nio.ch.Net.bind0(Native Method)
at sun.nio.ch.Net.bind(Net.java:414)
at sun.nio.ch.Net.bind(Net.java:406)
at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.open(SelectChannelConnector.java:187)
at 
org.eclipse.jetty.server.AbstractConnector.doStart(AbstractConnector.java:316)
at 
org.eclipse.jetty.server.nio.SelectChannelConnector.doStart(SelectChannelConnector.java:265)
at 
org.eclipse.jetty.server.ssl.SslSelectChannelConnector.doStart(SslSelectChannelConnector.java:631)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at org.eclipse.jetty.server.Server.doStart(Server.java:291)
at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:64)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:418)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:391)
at org.apache.solr.cloud.RecoveryZkTest.doTest(RecoveryZkTest.java:93)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:865)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 

[jira] [Commented] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082481#comment-14082481
 ] 

Robert Muir commented on SOLR-6306:
---

Well it just means the format changed, several times during development. 
Sometimes it's just little changes, like LUCENE-5750.

In general once we have an official release, the format for that version is 
frozen and then we add backwards compatibility indexes and test for it. But 
anything in between releases probably cannot be upgraded.


[jira] [Closed] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Brett Hoerner (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brett Hoerner closed SOLR-6306.
---

Resolution: Invalid


[jira] [Commented] (SOLR-6306) Problem using Solr 4.9 index with 4.10 build (merge failures with DocValues?)

2014-08-01 Thread Brett Hoerner (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082484#comment-14082484
 ] 

Brett Hoerner commented on SOLR-6306:
-

Makes sense, thanks for your help!

 

[jira] [Commented] (SOLR-6103) Add DateRangeField

2014-08-01 Thread Adrien Brault (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082535#comment-14082535
 ] 

Adrien Brault commented on SOLR-6103:
-

[~dsmiley] Are you still planning to get that change into 4.x?

It would allow us to solve a lot of problems that are way too hard to implement 
without the DateRangeField.

 Add DateRangeField
 --

 Key: SOLR-6103
 URL: https://issues.apache.org/jira/browse/SOLR-6103
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: SOLR-6103.patch


 LUCENE-5648 introduced a date range index & search capability in the spatial 
 module. This issue is for a corresponding Solr FieldType to be named 
 DateRangeField. LUCENE-5648 includes a parseCalendar(String) method that 
 parses a superset of Solr's strict date format.  It also parses partial dates 
 (e.g.: 2014-10  has month specificity), and the trailing 'Z' is optional, and 
 a leading +/- may be present (minus indicates BC era), and * means 
 all-time.  The proposed field type would use it to parse a string and also 
 both ends of a range query, but furthermore it will also allow an arbitrary 
 range query of the form {{calspec TO calspec}} such as:
 {noformat}2000 TO 2014-05-21T10{noformat}
 Which parses as the year 2000 thru 2014 May 21st 10am (GMT). 
 I suggest this syntax because it is aligned with Lucene's range query syntax. 
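As a standalone illustration of the expansion the issue describes (not the actual LUCENE-5648 parseCalendar code), a partial date spec at year, month, or day specificity maps to an instant range like this:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public class PartialDateRange {
    // Expand a partial ISO date spec (e.g. "2014", "2014-10", "2014-10-05")
    // to the [inclusive start, exclusive end) instant range it covers.
    // Hypothetical sketch: ignores BC-era signs, '*', and time-of-day parts.
    static Instant[] expand(String spec) {
        String[] p = spec.split("-");
        int year = Integer.parseInt(p[0]);
        LocalDateTime start;
        LocalDateTime end;
        if (p.length == 1) {           // year specificity
            start = LocalDate.of(year, 1, 1).atStartOfDay();
            end = start.plusYears(1);
        } else if (p.length == 2) {    // month specificity
            start = LocalDate.of(year, Integer.parseInt(p[1]), 1).atStartOfDay();
            end = start.plusMonths(1);
        } else {                       // day specificity
            start = LocalDate.of(year, Integer.parseInt(p[1]),
                                 Integer.parseInt(p[2])).atStartOfDay();
            end = start.plusDays(1);
        }
        return new Instant[] { start.toInstant(ZoneOffset.UTC),
                               end.toInstant(ZoneOffset.UTC) };
    }

    public static void main(String[] args) {
        Instant[] r = expand("2014-10");
        System.out.println(r[0] + " .. " + r[1]);
        // 2014-10-01T00:00:00Z .. 2014-11-01T00:00:00Z
    }
}
```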
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6103) Add DateRangeField

2014-08-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082557#comment-14082557
 ] 

David Smiley commented on SOLR-6103:


I am; I'm waiting to port some related API improvements once I figure out one 
last issue.  Have you tried the feature on 5x?  It's very easy to try.

 Add DateRangeField
 --

 Key: SOLR-6103
 URL: https://issues.apache.org/jira/browse/SOLR-6103
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: SOLR-6103.patch


 LUCENE-5648 introduced a date range index & search capability in the spatial 
 module. This issue is for a corresponding Solr FieldType to be named 
 DateRangeField. LUCENE-5648 includes a parseCalendar(String) method that 
 parses a superset of Solr's strict date format.  It also parses partial dates 
 (e.g.: 2014-10  has month specificity), and the trailing 'Z' is optional, and 
 a leading +/- may be present (minus indicates BC era), and * means 
 all-time.  The proposed field type would use it to parse a string and also 
 both ends of a range query, but furthermore it will also allow an arbitrary 
 range query of the form {{calspec TO calspec}} such as:
 {noformat}2000 TO 2014-05-21T10{noformat}
 Which parses as the year 2000 thru 2014 May 21st 10am (GMT). 
 I suggest this syntax because it is aligned with Lucene's range query syntax. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5831) Scale score PostFilter

2014-08-01 Thread Peter Keegan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Keegan updated SOLR-5831:
---

Attachment: scalescoreplugin.zip

I reimplemented this PostFilter as a RankQuery in 4.9. Although it still has 
some of the complexity of the PostFilter, it no longer has to manage its own PQ 
and since it's a plugin, there are no changes to Solr core. I haven't figured 
out how to implement the 'explain' method yet, since most of the state is in 
the collector. Also, where does one contribute external plugins?

Peter

 Scale score PostFilter
 --

 Key: SOLR-5831
 URL: https://issues.apache.org/jira/browse/SOLR-5831
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.7
Reporter: Peter Keegan
Assignee: Joel Bernstein
Priority: Minor
 Fix For: 4.9

 Attachments: SOLR-5831.patch, SOLR-5831.patch, SOLR-5831.patch, 
 SOLR-5831.patch, SOLR-5831.patch, TestScaleScoreQParserPlugin.patch, 
 scalescoreplugin.zip


 The ScaleScoreQParserPlugin is a PostFilter that performs score scaling.
 This is an alternative to using a function query wrapping a scale() wrapping 
 a query(). For example:
 select?qq={!edismax v='news' qf='title^2 
 body'}&scaledQ=scale(product(query($qq),1),0,1)&q={!func}sum(product(0.75,$scaledQ),product(0.25,field(myfield)))&fq={!query
  v=$qq}
 The problem with this query is that it has to scale every hit. Usually, only 
 the returned hits need to be scaled,
 but there may be use cases where the number of hits to be scaled is greater 
 than the returned hit count,
 but less than or equal to the total hit count.
 Sample syntax:
 fq={!scalescore+l=0.0 u=1.0 maxscalehits=1 
 func=sum(product(sscore(),0.75),product(field(myfield),0.25))}
 l=0.0 u=1.0   //Scale scores to values between 0-1, inclusive 
 maxscalehits=1//The maximum number of result scores to scale (-1 = 
 all hits, 0 = results 'page' size)
 func=...  //Apply the composite function to each hit. The 
 scaled score value is accessed by the 'score()' value source
 All parameters are optional. The defaults are:
 l=0.0 u=1.0
 maxscalehits=0 (result window size)
 func=(null)
  
 Note: this patch is not complete, as it contains no test cases and may not 
 conform 
 to all the guidelines in http://wiki.apache.org/solr/HowToContribute. 
  
 I would appreciate any feedback on the usability and implementation.
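The scaling itself is plain min-max normalization. A standalone sketch of that step (hypothetical code, not the attached patch), rescaling a batch of raw scores into [l, u]:

```java
import java.util.Arrays;

public class ScaleScores {
    // Linearly rescale raw scores into [l, u] (min-max scaling), the same
    // transform the scale() function query applies; a composite function
    // such as sum(product(sscore(),0.75),...) would then combine the
    // scaled score with other field values per hit.
    static float[] scale(float[] scores, float l, float u) {
        float min = Float.POSITIVE_INFINITY;
        float max = Float.NEGATIVE_INFINITY;
        for (float s : scores) {
            min = Math.min(min, s);
            max = Math.max(max, s);
        }
        float range = max - min;
        float[] out = new float[scores.length];
        for (int i = 0; i < scores.length; i++) {
            // All-equal scores collapse to the lower bound.
            out[i] = range == 0 ? l : l + (u - l) * (scores[i] - min) / range;
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(scale(new float[] {2f, 4f, 6f}, 0f, 1f)));
        // [0.0, 0.5, 1.0]
    }
}
```

This mirrors the l=0.0 u=1.0 defaults in the sample syntax above; maxscalehits would bound how many of the collected hits get fed through this transform.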



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2014-08-01 Thread Steve Molloy (JIRA)
Steve Molloy created SOLR-6311:
--

 Summary: SearchHandler should use path when no qt or shard.qt 
parameter is specified
 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy


When performing distributed searches, you have to specify shards.qt unless 
you're on the default /select path for your handler. As this is configurable, 
even the default search handler could be on another path. The shard requests 
should thus default to the path if no shards.qt was specified.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6311) SearchHandler should use path when no qt or shard.qt parameter is specified

2014-08-01 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-6311:
---

Attachment: SOLR-6311.patch

This patch will use shards.qt if specified, default to qt if not, then default 
to path if both were omitted.
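The fallback order the patch describes can be sketched as (hypothetical helper, not the patch itself):

```java
public class ShardHandlerPath {
    // Fallback described in the patch: use shards.qt if present, else qt,
    // else the path the top-level request came in on.
    static String shardQt(String shardsQt, String qt, String path) {
        if (shardsQt != null) return shardsQt;
        if (qt != null) return qt;
        return path;
    }

    public static void main(String[] args) {
        // A handler registered at /browse with neither param set would now
        // fan out to /browse on the shards instead of /select.
        System.out.println(shardQt(null, null, "/browse")); // /browse
    }
}
```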

 SearchHandler should use path when no qt or shard.qt parameter is specified
 ---

 Key: SOLR-6311
 URL: https://issues.apache.org/jira/browse/SOLR-6311
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Molloy
 Attachments: SOLR-6311.patch


 When performing distributed searches, you have to specify shards.qt unless 
 you're on the default /select path for your handler. As this is configurable, 
 even the default search handler could be on another path. The shard requests 
 should thus default to the path if no shards.qt was specified.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6312) CloudSolrServer doesn't honor updatesToLeaders constructor argument

2014-08-01 Thread Steve Davids (JIRA)
Steve Davids created SOLR-6312:
--

 Summary: CloudSolrServer doesn't honor updatesToLeaders 
constructor argument
 Key: SOLR-6312
 URL: https://issues.apache.org/jira/browse/SOLR-6312
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Steve Davids
 Fix For: 4.10


The CloudSolrServer doesn't use the updatesToLeaders property - all SolrJ 
requests are being sent to the shard leaders.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5867) Add BooleanSimilarity

2014-08-01 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5867?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082778#comment-14082778
 ] 

Jack Krupansky commented on LUCENE-5867:


Would this be expected to result in any dramatic improvement in indexing or 
query performance, or a dramatic reduction in index size?


 Add BooleanSimilarity
 -

 Key: LUCENE-5867
 URL: https://issues.apache.org/jira/browse/LUCENE-5867
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Robert Muir
 Attachments: LUCENE-5867.patch


 This can be used when the user doesn't want tf/idf scoring for some reason. 
 The idea is that the score is just query_time_boost * index_time_boost, no 
 queryNorm/IDF/TF/lengthNorm...



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5156) CompressingTermVectors termsEnum should probably not support seek-by-ord

2014-08-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082898#comment-14082898
 ] 

David Smiley commented on LUCENE-5156:
--

I can understand why this change was made -- better not to support the 
operation than to support an optional API that callers expect to be fast yet 
isn't. What if it were made fast, along with seekCeil(), which is also 
implemented in linear time right now? For example, the first time either 
seekCeil or an ord method is called, build an array of term start positions by 
ordinal, which otherwise wouldn't be built. Then seekCeil becomes a binary 
search and seekExact a direct lookup. The lazy-created array could also be 
shared across repeated invocations to get Terms for the current document.

Why bother, you might ask?  I'm working on a means of having the Terms from 
term vectors be directly searched against by the default highlighter instead of 
re-inverting to MemoryIndex.  I'll post a separate issue for that with code, of 
course, which works but isn't as efficient as it could be thanks to the O(N) 
of seekCeil on term vectors' Terms.
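The lazy ord-index idea above can be sketched standalone (strings stand in for term start offsets here; this is not Lucene's actual TermsEnum API):

```java
import java.util.Arrays;

public class LazyOrdIndex {
    // On first use of an ord method or seekCeil, build an index over the
    // sorted terms once; afterwards seekExact(ord) is a direct lookup and
    // seekCeil a binary search instead of a linear scan.
    private final String[] terms; // terms in sorted order, as in a TermsEnum
    private String[] ordIndex;    // built lazily, shared by later calls

    LazyOrdIndex(String[] sortedTerms) {
        this.terms = sortedTerms;
    }

    private String[] index() {
        if (ordIndex == null) {
            ordIndex = terms.clone(); // O(n), paid only once
        }
        return ordIndex;
    }

    String seekExact(int ord) {
        return index()[ord]; // O(1)
    }

    int seekCeil(String target) {
        int pos = Arrays.binarySearch(index(), target); // O(log n)
        return pos >= 0 ? pos : -pos - 1; // ord of first term >= target
    }
}
```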

 CompressingTermVectors termsEnum should probably not support seek-by-ord
 

 Key: LUCENE-5156
 URL: https://issues.apache.org/jira/browse/LUCENE-5156
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 4.5, 5.0

 Attachments: LUCENE-5156.patch


 Just like term vectors before it, it has a O(n) seek-by-term. 
 But this one also advertises a seek-by-ord, only this is also O(n).
 This could cause e.g. checkindex to be very slow, because if termsenum 
 supports ord it does a bunch of seeking tests. (Another solution would be to 
 leave it, and add a boolean so checkindex never does seeking tests for term 
 vectors, only real fields).
 However, I think its also kinda a trap, in my opinion if seek-by-ord is 
 supported anywhere, you kinda expect it to be faster than linear time...?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6103) Add DateRangeField

2014-08-01 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082978#comment-14082978
 ] 

Jack Krupansky commented on SOLR-6103:
--

You might want to take a peek at the LucidWorks Search query parser support of 
date queries. It would be so nice to have comparable date support in Solr 
itself.

It includes the ability to auto-expand a simple partial date/time term into a 
full range, as well as using partial date/time in explicit range queries.

See:
http://docs.lucidworks.com/display/lweug/Date+Queries


 Add DateRangeField
 --

 Key: SOLR-6103
 URL: https://issues.apache.org/jira/browse/SOLR-6103
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: SOLR-6103.patch


 LUCENE-5648 introduced a date range index & search capability in the spatial 
 module. This issue is for a corresponding Solr FieldType to be named 
 DateRangeField. LUCENE-5648 includes a parseCalendar(String) method that 
 parses a superset of Solr's strict date format.  It also parses partial dates 
 (e.g.: 2014-10  has month specificity), and the trailing 'Z' is optional, and 
 a leading +/- may be present (minus indicates BC era), and * means 
 all-time.  The proposed field type would use it to parse a string and also 
 both ends of a range query, but furthermore it will also allow an arbitrary 
 range query of the form {{calspec TO calspec}} such as:
 {noformat}2000 TO 2014-05-21T10{noformat}
 Which parses as the year 2000 thru 2014 May 21st 10am (GMT). 
 I suggest this syntax because it is aligned with Lucene's range query syntax. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6103) Add DateRangeField

2014-08-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14082991#comment-14082991
 ] 

David Smiley commented on SOLR-6103:


Thanks Jack, though I feel that's a separate issue. At least this field *does* 
let you specify convenient prefixes of Solr's date syntax and get the range 
equivalent for that unit of time.

 Add DateRangeField
 --

 Key: SOLR-6103
 URL: https://issues.apache.org/jira/browse/SOLR-6103
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: SOLR-6103.patch


 LUCENE-5648 introduced a date range index & search capability in the spatial 
 module. This issue is for a corresponding Solr FieldType to be named 
 DateRangeField. LUCENE-5648 includes a parseCalendar(String) method that 
 parses a superset of Solr's strict date format.  It also parses partial dates 
 (e.g.: 2014-10  has month specificity), and the trailing 'Z' is optional, and 
 a leading +/- may be present (minus indicates BC era), and * means 
 all-time.  The proposed field type would use it to parse a string and also 
 both ends of a range query, but furthermore it will also allow an arbitrary 
 range query of the form {{calspec TO calspec}} such as:
 {noformat}2000 TO 2014-05-21T10{noformat}
 Which parses as the year 2000 thru 2014 May 21st 10am (GMT). 
 I suggest this syntax because it is aligned with Lucene's range query syntax. 
  



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2014-08-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083006#comment-14083006
 ] 

Mark Miller commented on SOLR-6305:
---

Example project link: https://github.com/markrmiller/solr-map-reduce-example

 Ability to set the replication factor for index files created by 
 HDFSDirectoryFactory
 -

 Key: SOLR-6305
 URL: https://issues.apache.org/jira/browse/SOLR-6305
 Project: Solr
  Issue Type: Improvement
  Components: hdfs
 Environment: hadoop-2.2.0
Reporter: Timothy Potter

 HdfsFileWriter doesn't allow us to create files in HDFS with a different 
 replication factor than the configured DFS default because it uses: 
 {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
 Since we have two forms of replication going on when using 
 HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
 factor for the Solr directories to a lower value than the default. I realize 
 this might reduce the chance of data locality but since Solr cores each have 
 their own path in HDFS, we should give operators the option to reduce it.
 My original thinking was to just use Hadoop setrep to customize the 
 replication factor, but that's a one-time shot and doesn't affect new files 
 created. For instance, I did:
 {{hadoop fs -setrep -R 1 solr49/coll1}}
 My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
 example
 Then added some more docs to the coll1 and did:
 {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
 3 -- should be 1
 So it looks like new files don't inherit the repfact from their parent 
 directory.
 Not sure if we need to go as far as allowing different replication factor per 
 collection but that should be considered if possible.
 I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
 this using the Configuration object but nothing jumped out at me ... and the 
 implementation for getServerDefaults(path) is just:
   public FsServerDefaults getServerDefaults(Path p) throws IOException {
 return getServerDefaults();
   }
 Path is ignored ;-)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2014-08-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083004#comment-14083004
 ] 

Mark Miller commented on SOLR-6305:
---

We use it to do things like this, so I don't think it can be totally 
disregarded. Mainly we use it for configuring name node HA, not lowering 
replication though.

For example, in the example project I have on GitHub, I have to configure the 
client with dfs.replication=1 because there is only one data node; otherwise it 
complains that it can't meet the replication factor. With the client set to 
dfs.replication=1, it no longer complains.
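The client-side override being described is the standard Hadoop dfs.replication property; something like the following hdfs-site.xml fragment (a sketch of the workaround, while per-directory control through HDFSDirectoryFactory is what this issue proposes):

```xml
<!-- Client-side override: new files created through this configuration
     request replication 1 instead of the server default of 3. -->
<property>
  <name>dfs.replication</name>
  <value>1</value>
</property>
```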

 Ability to set the replication factor for index files created by 
 HDFSDirectoryFactory
 -

 Key: SOLR-6305
 URL: https://issues.apache.org/jira/browse/SOLR-6305
 Project: Solr
  Issue Type: Improvement
  Components: hdfs
 Environment: hadoop-2.2.0
Reporter: Timothy Potter

 HdfsFileWriter doesn't allow us to create files in HDFS with a different 
 replication factor than the configured DFS default because it uses: 
 {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
 Since we have two forms of replication going on when using 
 HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
 factor for the Solr directories to a lower value than the default. I realize 
 this might reduce the chance of data locality but since Solr cores each have 
 their own path in HDFS, we should give operators the option to reduce it.
 My original thinking was to just use Hadoop setrep to customize the 
 replication factor, but that's a one-time shot and doesn't affect new files 
 created. For instance, I did:
 {{hadoop fs -setrep -R 1 solr49/coll1}}
 My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
 example
 Then added some more docs to the coll1 and did:
 {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
 3 -- should be 1
 So it looks like new files don't inherit the repfact from their parent 
 directory.
 Not sure if we need to go as far as allowing different replication factor per 
 collection but that should be considered if possible.
 I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
 this using the Configuration object but nothing jumped out at me ... and the 
 implementation for getServerDefaults(path) is just:
   public FsServerDefaults getServerDefaults(Path p) throws IOException {
 return getServerDefaults();
   }
 Path is ignored ;-)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-4.x-Linux (64bit/jdk1.8.0_20-ea-b23) - Build # 10820 - Failure!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/10820/
Java: 64bit/jdk1.8.0_20-ea-b23 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
REGRESSION:  org.apache.solr.request.TestIntervalFaceting.testMultipleSegments

Error Message:
Expected multiple reader leaves. Found 1

Stack Trace:
java.lang.AssertionError: Expected multiple reader leaves. Found 1
at 
__randomizedtesting.SeedInfo.seed([F3343F2F5AAAC7DF:4F438A769D0D6B9A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.request.TestIntervalFaceting.assertMultipleReaders(TestIntervalFaceting.java:134)
at 
org.apache.solr.request.TestIntervalFaceting.testMultipleSegments(TestIntervalFaceting.java:103)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6305) Ability to set the replication factor for index files created by HDFSDirectoryFactory

2014-08-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083016#comment-14083016
 ] 

Mark Miller commented on SOLR-6305:
---

Now if I don't set the replication factor to 1 and leave it at its default of 
3 and try to start Solr, I get errors like:

org.apache.hadoop.ipc.RemoteException(java.io.IOException): file 
/solr_test/collection1/core_node3/data/index/org.apache.solr.store.hdfs.HdfsDirectory@28b3d48e
 lockFactory=org.apache.solr.store.hdfs.hdfslockfact...@2dc3049a-write.lock on 
client 127.0.0.1.
Requested replication 3 exceeds maximum 1

 Ability to set the replication factor for index files created by 
 HDFSDirectoryFactory
 -

 Key: SOLR-6305
 URL: https://issues.apache.org/jira/browse/SOLR-6305
 Project: Solr
  Issue Type: Improvement
  Components: hdfs
 Environment: hadoop-2.2.0
Reporter: Timothy Potter

 HdfsFileWriter doesn't allow us to create files in HDFS with a different 
 replication factor than the configured DFS default because it uses: 
 {{FsServerDefaults fsDefaults = fileSystem.getServerDefaults(path);}}
 Since we have two forms of replication going on when using 
 HDFSDirectoryFactory, it would be nice to be able to set the HDFS replication 
 factor for the Solr directories to a lower value than the default. I realize 
 this might reduce the chance of data locality but since Solr cores each have 
 their own path in HDFS, we should give operators the option to reduce it.
 My original thinking was to just use Hadoop setrep to customize the 
 replication factor, but that's a one-time shot and doesn't affect new files 
 created. For instance, I did:
 {{hadoop fs -setrep -R 1 solr49/coll1}}
 My default dfs replication is set to 3 ^^ I'm setting it to 1 just as an 
 example
 Then added some more docs to the coll1 and did:
 {{hadoop fs -stat %r solr49/hdfs1/core_node1/data/index/segments_3}}
 3 -- should be 1
 So it looks like new files don't inherit the repfact from their parent 
 directory.
 Not sure if we need to go as far as allowing different replication factor per 
 collection but that should be considered if possible.
 I looked at the Hadoop 2.2.0 code to see if there was a way to work through 
 this using the Configuration object but nothing jumped out at me ... and the 
 implementation for getServerDefaults(path) is just:
   public FsServerDefaults getServerDefaults(Path p) throws IOException {
 return getServerDefaults();
   }
 Path is ignored ;-)



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5156) CompressingTermVectors termsEnum should probably not support seek-by-ord

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083026#comment-14083026
 ] 

Robert Muir commented on LUCENE-5156:
-

That's unrelated to term vectors. We shouldn't have such caching in the default 
codec; it can easily blow up on a large document.

 CompressingTermVectors termsEnum should probably not support seek-by-ord
 

 Key: LUCENE-5156
 URL: https://issues.apache.org/jira/browse/LUCENE-5156
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 4.5, 5.0

 Attachments: LUCENE-5156.patch


 Just like term vectors before it, it has a O(n) seek-by-term. 
 But this one also advertises a seek-by-ord, only this is also O(n).
 This could cause e.g. checkindex to be very slow, because if termsenum 
 supports ord it does a bunch of seeking tests. (Another solution would be to 
 leave it, and add a boolean so checkindex never does seeking tests for term 
 vectors, only real fields).
 However, I think its also kinda a trap, in my opinion if seek-by-ord is 
 supported anywhere, you kinda expect it to be faster than linear time...?



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-01 Thread Mark Miller (JIRA)
Mark Miller created SOLR-6313:
-

 Summary: Improve SolrCloud cloud-dev scripts.
 Key: SOLR-6313
 URL: https://issues.apache.org/jira/browse/SOLR-6313
 Project: Solr
  Issue Type: Improvement
  Components: SolrCloud
Reporter: Mark Miller
Assignee: Mark Miller


I've been improving the cloud-dev scripts to help with manual testing. I've 
been doing this mostly as part of SOLR-5656, but I'd like to spin it out into 
its own issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6103) Add DateRangeField

2014-08-01 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083031#comment-14083031
 ] 

Jack Krupansky commented on SOLR-6103:
--

One nuance is for the end of the range - [2010 TO 2012] should expand the 
starting date to the beginning of that period, but expand the ending date to 
the end of that period (2012-12-31T23:59:59.999Z). And [2010 TO 2012} would 
expand the ending date to the beginning (rather than the ending) of the period 
(2012-01-01T00:00:00Z), with the exclusive flag set as well.
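Jack's expansion rule can be sketched as follows (a hypothetical helper, not Solr code; millisecond precision as in the example above):

```python
from datetime import datetime, timedelta

def expand_end(year, inclusive):
    """Expand a year-granularity range endpoint.

    Inclusive ']' -> the last instant of the period;
    exclusive '}' -> the start of the period (exclusive flag set by caller).
    """
    if inclusive:
        # end of the period: one millisecond before the next year starts
        return datetime(year + 1, 1, 1) - timedelta(milliseconds=1)
    return datetime(year, 1, 1)  # beginning of the period

print(expand_end(2012, inclusive=True))   # 2012-12-31 23:59:59.999000
print(expand_end(2012, inclusive=False))  # 2012-01-01 00:00:00
```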


 Add DateRangeField
 --

 Key: SOLR-6103
 URL: https://issues.apache.org/jira/browse/SOLR-6103
 Project: Solr
  Issue Type: New Feature
  Components: spatial
Reporter: David Smiley
Assignee: David Smiley
 Fix For: 5.0

 Attachments: SOLR-6103.patch


 LUCENE-5648 introduced a date range index & search capability in the spatial 
 module. This issue is for a corresponding Solr FieldType to be named 
 DateRangeField. LUCENE-5648 includes a parseCalendar(String) method that 
 parses a superset of Solr's strict date format. It also parses partial dates 
 (e.g. 2014-10 has month specificity), the trailing 'Z' is optional, a 
 leading +/- may be present (minus indicates BC era), and * means 
 all-time. The proposed field type would use it to parse a string and also 
 both ends of a range query; furthermore, it will also allow an arbitrary 
 range query of the form {{calspec TO calspec}}, such as:
 {noformat}2000 TO 2014-05-21T10{noformat}
 which parses as the year 2000 through 2014 May 21st 10am (GMT). 
 I suggest this syntax because it aligns with Lucene's range query syntax. 
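The partial-date parsing described above (optional 'Z', optional sign, month/day/hour specificity) might look roughly like this Python sketch; it is illustrative only, not the actual LUCENE-5648 parseCalendar implementation (BC-era handling is omitted):

```python
import re
from datetime import datetime

# Hypothetical grammar: YYYY[-MM[-DD[THH]]] with optional trailing 'Z'.
PARTIAL = re.compile(
    r"[+-]?(?P<year>\d{4})"
    r"(?:-(?P<month>\d{2})(?:-(?P<day>\d{2})(?:T(?P<hour>\d{2}))?)?)?Z?$"
)

def parse_partial(s):
    """Return (datetime, specificity) for specs like '2014-10' or '2014-05-21T10'."""
    if s == "*":
        return None, "all-time"
    m = PARTIAL.match(s)
    if not m:
        raise ValueError("unparseable calendar spec: " + s)
    values, specificity = [], "year"
    for name, default in (("year", 1), ("month", 1), ("day", 1), ("hour", 0)):
        g = m.group(name)
        if g is not None:
            specificity = name        # deepest field present wins
        values.append(int(g) if g is not None else default)
    year, month, day, hour = values
    return datetime(year, month, day, hour), specificity

print(parse_partial("2014-10")[1])   # month
```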
  






[jira] [Commented] (LUCENE-5156) CompressingTermVectors termsEnum should probably not support seek-by-ord

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083029#comment-14083029
 ] 

Robert Muir commented on LUCENE-5156:
-

Personally I would do such a thing with a FilterTerms + FilterReader: you just 
check if docid == lastDocID and you have your cache.

But I don't think it should be in the default codec. I also happen to think term 
vectors aren't a good data structure for highlighting anyway.
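The FilterTerms-style cache suggested here boils down to remembering the last docid's Terms. A minimal Python sketch with invented names (CachingTermVectorsReader is not a Lucene class):

```python
class CachingTermVectorsReader:
    """Wraps a reader and reuses the last per-document Terms."""
    def __init__(self, reader):
        self.reader = reader          # underlying reader with get_terms(docid)
        self.last_doc_id = -1
        self.cached_terms = None

    def get_terms(self, docid):
        if docid == self.last_doc_id:     # the "docid == lastDocID" check
            return self.cached_terms
        self.cached_terms = self.reader.get_terms(docid)
        self.last_doc_id = docid
        return self.cached_terms

class CountingReader:
    """Test double that counts how often the expensive read happens."""
    def __init__(self):
        self.calls = 0
    def get_terms(self, docid):
        self.calls += 1
        return {"doc": docid}

inner = CountingReader()
caching = CachingTermVectorsReader(inner)
caching.get_terms(7); caching.get_terms(7); caching.get_terms(8)
print(inner.calls)  # 2: the repeated request for doc 7 hit the cache
```

This keeps the caching policy in the wrapper, which is why it doesn't need to live in the default codec.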







[jira] [Commented] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-01 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083049#comment-14083049
 ] 

Mark Miller commented on SOLR-6313:
---

I've removed the use of port 8983 as a special server-1 port.

Now, a 'cloud view only' Solr instance is available at port 8900. It won't register as a 
live_node, but you can use it to view the cluster at 
http://localhost:8900/solr/#/~cloud. I like to use an auto-refresh plugin to 
monitor it.

The rest of the servers are at ports 8901, 8902, etc.

I've also done work on the stop scripts to make it harder to start again 
before things have really stopped, which would otherwise force you to kill Java 
processes manually.

Other little improvements here and there.







[jira] [Commented] (LUCENE-5156) CompressingTermVectors termsEnum should probably not support seek-by-ord

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083065#comment-14083065
 ] 

Robert Muir commented on LUCENE-5156:
-

I also think it's OK if we fix the codec to have a faster seekExact (not by 
copying stuff into a large array on the first call, though; just by fixing the 
data structure / how it accesses data).

That would solve the actual problem you have here in a clean way.







[jira] [Commented] (LUCENE-5156) CompressingTermVectors termsEnum should probably not support seek-by-ord

2014-08-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083064#comment-14083064
 ] 

David Smiley commented on LUCENE-5156:
--

I agree on the caching thing -- that is, what I said about asking for Terms 
for the same document again. Never mind that part -- as I thought about it I 
realized I didn't need it after all.

bq. But i dont think it should be in the default codec. I also happen to think 
term vectors arent a good datastructure for highlighting anyway.

The default highlighter fully respects the positions and other aspects of the 
user's query, unlike the other highlighters.  Some applications demand that a 
highlight is accurate to the query, even if the query uses custom span queries 
that do tricks with payloads, etc.  It would be nice if the other highlighters 
supported accurate highlights for such queries but they don't, so today, this 
is the applicable one for accurate highlights for complex queries.  The default 
highlighter requires a Terms instance reflecting the current document -- it 
currently gets one by re-inverting into a MemoryIndex, but it can be hacked to 
accept a Terms directly from term vectors.

So you don't like the idea of enhancing the performance of term-vector seekCeil 
in the default codec? Is that a -1 or a -0? The change I propose seems harmless 
-- the code would not create & build up the new offset array if consuming code 
doesn't call seekCeil or the ord methods.
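The lazily-built structure described here -- pay for the seek index only when seekCeil or an ord method is actually called -- can be sketched like this (illustrative names, not the actual patch):

```python
from bisect import bisect_left

class LazySeekTerms:
    """Seek support whose index is materialized only on first use."""
    def __init__(self, terms_in_doc_order):
        self._raw = terms_in_doc_order
        self._sorted = None           # the "offset array" stand-in, built on demand

    def _ensure_index(self):
        if self._sorted is None:      # pay the cost only on the first seek
            self._sorted = sorted(self._raw)

    def seek_ceil(self, target):
        self._ensure_index()
        return bisect_left(self._sorted, target)

    @property
    def index_built(self):
        return self._sorted is not None

t = LazySeekTerms(["cherry", "apple", "banana"])
print(t.index_built)          # False: plain iteration never pays for the index
print(t.seek_ceil("b"))       # 1
print(t.index_built)          # True
```

Consumers that only iterate never trigger `_ensure_index`, which is the "harmless" property claimed above.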







[jira] [Commented] (LUCENE-5156) CompressingTermVectors termsEnum should probably not support seek-by-ord

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083072#comment-14083072
 ] 

Robert Muir commented on LUCENE-5156:
-

Sorry David, it's not about being against speeding something up; it's about how 
you propose implementing it.

Copying all the data from the entire document into another array on the first 
read for the doc is a really trashy thing to do here. Instead, we should just 
fix it correctly, so that seekCeil() is not linear time.







[jira] [Updated] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-01 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-6313:
--

Attachment: SOLR-6313.patch







[jira] [Commented] (SOLR-6103) Add DateRangeField

2014-08-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083071#comment-14083071
 ] 

David Smiley commented on SOLR-6103:


I like that idea; it's inclusive right now but doesn't support an exclusive end 
via '}'.







[jira] [Commented] (LUCENE-5156) CompressingTermVectors termsEnum should probably not support seek-by-ord

2014-08-01 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083079#comment-14083079
 ] 

Robert Muir commented on LUCENE-5156:
-

Also, this should be discussed somewhere other than on an unrelated, closed, 
year-old issue -- like on its own issue. (Sorry, it's not really related to 
seek-by-ord; your problem is a more general one, and it wasn't created by this 
issue, nor even by compressing term vectors, but is older than that... this 
issue is closed.)







[jira] [Assigned] (SOLR-6275) Improve accuracy of QTime reporting

2014-08-01 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-6275:
-

Assignee: Mark Miller

 Improve accuracy of QTime reporting
 ---

 Key: SOLR-6275
 URL: https://issues.apache.org/jira/browse/SOLR-6275
 Project: Solr
  Issue Type: Improvement
  Components: search
Reporter: Ramkumar Aiyengar
Assignee: Mark Miller
Priority: Minor

 Currently, {{QTime}} uses {{currentTimeMillis}} instead of {{nanoTime}} and 
 hence is not suitable for time measurements. Further, it is really started 
 after all the dispatch logic in {{SolrDispatchFilter}} (same with the top 
 level timing reported by {{debug=timing}}) which may or may not be expensive, 
 and hence may not fully represent the time taken by the search. This is to 
 remedy both cases.
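The currentTimeMillis-vs-nanoTime distinction above is the wall-clock vs monotonic-clock distinction; the same contrast in Python (time.time vs time.monotonic):

```python
import time

def measure(work):
    """Time a callable with both a wall clock and a monotonic clock."""
    wall_start = time.time()        # like System.currentTimeMillis(): can jump
    mono_start = time.monotonic()   # like System.nanoTime(): never goes backwards
    work()
    return time.time() - wall_start, time.monotonic() - mono_start

wall, mono = measure(lambda: sum(range(100_000)))
# Only the monotonic delta is guaranteed meaningful if NTP steps the
# system clock mid-measurement; the wall delta could even be negative.
print(mono >= 0)  # True
```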






[jira] [Comment Edited] (SOLR-6103) Add DateRangeField

2014-08-01 Thread Jack Krupansky (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083031#comment-14083031
 ] 

Jack Krupansky edited comment on SOLR-6103 at 8/1/14 10:42 PM:
---

One nuance is for the end of the range - [2010 TO 2012] should expand the 
starting date to the beginning of that period, but expand the ending date to 
the end of that period (2012-12-31T23:59:59.999Z). And [2010 TO 2012} would 
expand the ending date to the beginning (rather than the ending) of the period 
(2012-01-01T00:00:00Z), with the exclusive flag set as well.



was (Author: jkrupan):
Once nuance is for the end of the range - [2010 TO 2012] should expand the 
starting date to the beginning of that period, but expand the ending date to 
the end of that period (2012-12-31T23:59:59.999Z). And [2010 TO 2012} would 
expand the ending date to the beginning (rather than the ending) of the period 
(2012-01-01T00:00:00Z), with the exclusive flag set as well.








[jira] [Commented] (SOLR-6313) Improve SolrCloud cloud-dev scripts.

2014-08-01 Thread Vamsee Yarlagadda (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083140#comment-14083140
 ] 

Vamsee Yarlagadda commented on SOLR-6313:
-

+1 on the patch. I tried running the scripts on my local system and everything 
works as expected.
Thanks for the effort, Mark.








Lucene versioning logic

2014-08-01 Thread Ryan Ernst
There has been a lot of heated discussion recently about version
tracking in Lucene [1] [2].  I wanted to have a fresh discussion
outside of jira to give a full description of the current state of
things, the problems I have heard, and a proposed solution.

CURRENT

We have 2 pieces of code that handle “versioning.”  The first is
Constants.LUCENE_MAIN_VERSION, which is written to the SegmentsInfo
for each segment.  This is a string version which is used to detect
when the current version of lucene is newer than the version that
wrote the segment (and how/if an upgrade to a newer codec should be
done). There is some complication with the “display” version and
non-display version, which are distinguished by whether the version of
lucene was an official release, or an alpha/beta version (which was
added specifically for the 4.0.0 release ramp up).  This string
version also has its own parsing and comparison methods.

The second piece of versioning code is in Version.java, which is an
enum used by analyzers to maintain backwards compatible behavior given
a specific version of lucene.  The enum only contains values for dot
releases of lucene, not bug fixes (which was what spurred the recent
discussions over version). Analyzers’ constructors take a required
Version parameter, which is only actually used by the few analyzers
that have changed behavior recently.  Version.java contains its own
separate version parsing and comparison methods.
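The consolidation being proposed -- one type that parses and compares versions, with bug-fix releases first-class -- could look roughly like this (an illustrative sketch, not the actual Version.java API):

```python
from functools import total_ordering

@total_ordering
class LuceneVersion:
    """One place for parsing and comparing 'major.minor.bugfix' versions."""
    def __init__(self, major, minor, bugfix=0):
        self.parts = (major, minor, bugfix)

    @classmethod
    def parse(cls, s):
        nums = [int(p) for p in s.split(".")]
        while len(nums) < 3:
            nums.append(0)            # "4.5" is the same release as "4.5.0"
        return cls(*nums[:3])

    def __eq__(self, other):
        return self.parts == other.parts

    def __lt__(self, other):
        return self.parts < other.parts

    def on_or_after(self, other):
        """Back-compat check: is this version >= the given one?"""
        return self >= other

print(LuceneVersion.parse("4.10") > LuceneVersion.parse("4.9.1"))  # True
```

Because bug-fix digits are part of the tuple, a behavioral change shipped in 4.5.1 can be gated precisely rather than lumped in with 4.5.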


CONCERNS

* Having 2 different pieces of code that do very similar things is
confusing for development.  Very few developers appear to really
understand the current system (especially when trying to understand
the alpha/beta setup).

* Users are generally confused by the Version passed to analyzers: I
know I was when I first started working with Lucene, and
Version.CURRENT_VERSION was deprecated because users used that without
understanding the implications.

* Most analyzers currently have dead code constructors, since they
never make use of Version.  There are also a lot of classes used by
analyzers which contain similar dead code.

* Backwards compatibility needs to be handled in some fashion, to
ensure users have a path to upgrade from one version of lucene to
another, without requiring immediate re-indexing.


PROPOSAL

I propose the following:

* Consolidate all version related enumeration, including reading and
writing string versions, into Version.java.  Have a static method that
returns the current lucene version (replacing
Constants.LUCENE_MAIN_VERSION).

* Make bug fix releases first class in the enumeration, so that they
can be distinguished for any compatibility issues that come up.

* Remove all snapshot/alpha/beta versioning logic.  Alpha/beta was
really only necessary for 4.0 because of the extreme changes that were
being made.  The system is much more stable now, and 5.0 should not
require preview releases, IMO.  I don’t think snapshots should be a
concern because any user building an index from an unreleased build
(which they built themselves) is just asking for trouble.  They do so
at their own risk (of figuring out how to upgrade their indexes if
they are not trash-able).  Backwards compatibility can be handled by
adding the alpha/beta/final versions of 4.0 to the enum (and special
parsing logic for this).  If lucene changes so much that we need
alpha/beta type discrimination in the future, we can revisit the
system if simply having extra versions in the enum won't work.

* Analyzers' constructors should have Version removed, and a setter
should be added which allows production users to set the version used.
This way any analyzers can still use version if it is set to something
other than current (which would be the default), but users simply
prototyping do not need to worry about it.
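That bullet can be sketched as follows (invented names; the version-dependent behavior shown is hypothetical):

```python
CURRENT_VERSION = (5, 0)

class SketchAnalyzer:
    """No Version in the constructor; a setter for production users."""
    def __init__(self):
        self.version = CURRENT_VERSION    # default: current behavior

    def set_version(self, version):
        """Production users pin the version to preserve old index behavior."""
        self.version = version

    def tokenize(self, text):
        tokens = text.lower().split()
        if self.version < (4, 8):
            # hypothetical pre-4.8 behavior kept for back-compat
            tokens = [t.strip(".,") for t in tokens]
        return tokens

a = SketchAnalyzer()                      # prototyping: just works
print(a.tokenize("Hello, World."))        # ['hello,', 'world.']
b = SketchAnalyzer()
b.set_version((4, 7))                     # pinned old behavior
print(b.tokenize("Hello, World."))        # ['hello', 'world']
```

The version-variant logic lives entirely inside the analyzer, which is the point of the following bullet as well.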

* Classes that analyzers use, which take Version, should have Version
removed, and the analyzers should choose which settings/variants of
those classes to use based on the version they have set. In other
words, all version variant logic should be contained within the
analyzers.  For example, Lucene47WordDelimiterFilter, or
StandardAnalyzer can take the unicode version.
Factories could still take Version (e.g. TokenizerFactory,
TokenFilterFactory, etc) to produce the correct component (so nothing
will change for solr in this regard).

I’m sure not everyone will be happy with what I have proposed, but I’m
hoping we can work out a solution together, and then implement in a
team-like fashion, the way I have seen the community work in the past,
and I hope to see again in the future.

Thanks
Ryan

[1] https://issues.apache.org/jira/browse/LUCENE-5850
[2] https://issues.apache.org/jira/browse/LUCENE-5859




[JENKINS] Lucene-Solr-Tests-4.x-Java7 - Build # 2044 - Still Failing

2014-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-4.x-Java7/2044/

1 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
We have a failed SPLITSHARD task

Stack Trace:
java.lang.AssertionError: We have a failed SPLITSHARD task
at 
__randomizedtesting.SeedInfo.seed([25252F3F7F3BFEED:A4C3A12708649ED1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testTaskExclusivity(MultiThreadedOCPTest.java:125)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:71)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-6314) Multi-threaded facet counts differ when SolrCloud has 1 shard

2014-08-01 Thread Vamsee Yarlagadda (JIRA)
Vamsee Yarlagadda created SOLR-6314:
---

 Summary: Multi-threaded facet counts differ when SolrCloud has 1 
shard
 Key: SOLR-6314
 URL: https://issues.apache.org/jira/browse/SOLR-6314
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, SolrCloud
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda


I am trying to work with multi-threaded faceting on SolrCloud, and in the 
process I was hit by some issues.

I am currently running the upstream test below on different SolrCloud 
configurations, and I am getting a different result set per configuration.
https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/TestFaceting.java#L654

Setup:
- *Indexed 50 docs into SolrCloud.*

- *If the SolrCloud has only 1 shard, the facet field query has the below 
output (which matches the expected upstream test output - # facet fields ~ 
50).*

{code}
$ curl "http://localhost:8983/solr/collection1/select?facet=true&fl=id&indent=true&q=id%3A*&facet.limit=-1&facet.threads=1000&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f0_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f1_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f2_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f3_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f4_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f5_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f6_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f7_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f8_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&facet.field=f9_ws&rows=1&wt=xml"

<?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader">
  <int name="status">0</int>
  <int name="QTime">21</int>
  <lst name="params">
    <str name="facet">true</str>
    <str name="fl">id</str>
    <str name="indent">true</str>
    <str name="q">id:*</str>
    <str name="facet.limit">-1</str>
    <str name="facet.threads">1000</str>
    <arr name="facet.field">
      <str>f0_ws</str>
      <str>f0_ws</str>
      <str>f0_ws</str>
      <str>f0_ws</str>
      <str>f0_ws</str>
      <str>f1_ws</str>
      <str>f1_ws</str>
      <str>f1_ws</str>
      <str>f1_ws</str>
      <str>f1_ws</str>
      <str>f2_ws</str>
      <str>f2_ws</str>
      <str>f2_ws</str>
      <str>f2_ws</str>
      <str>f2_ws</str>
      <str>f3_ws</str>
      <str>f3_ws</str>
      <str>f3_ws</str>
      <str>f3_ws</str>
      <str>f3_ws</str>
      <str>f4_ws</str>
      <str>f4_ws</str>
      <str>f4_ws</str>
      <str>f4_ws</str>
      <str>f4_ws</str>
      <str>f5_ws</str>
      <str>f5_ws</str>
      <str>f5_ws</str>
      <str>f5_ws</str>
      <str>f5_ws</str>
      <str>f6_ws</str>
      <str>f6_ws</str>
      <str>f6_ws</str>
      <str>f6_ws</str>
      <str>f6_ws</str>
      <str>f7_ws</str>
      <str>f7_ws</str>
      <str>f7_ws</str>
      <str>f7_ws</str>
      <str>f7_ws</str>
      <str>f8_ws</str>
      <str>f8_ws</str>
      <str>f8_ws</str>
      <str>f8_ws</str>
      <str>f8_ws</str>
      <str>f9_ws</str>
      <str>f9_ws</str>
      <str>f9_ws</str>
      <str>f9_ws</str>
      <str>f9_ws</str>
    </arr>
    <str name="wt">xml</str>
    <str name="rows">1</str>
  </lst>
</lst>
<result name="response" numFound="50" start="0">
  <doc>
    <float name="id">0.0</float></doc>
</result>
<lst name="facet_counts">
  <lst name="facet_queries"/>
  <lst name="facet_fields">
    <lst name="f0_ws">
      <int name="zero_1">25</int>
      <int name="zero_2">25</int>
    </lst>
    <lst name="f0_ws">
      <int name="zero_1">25</int>
      <int name="zero_2">25</int>
    </lst>
    <lst name="f0_ws">
      <int name="zero_1">25</int>
      <int name="zero_2">25</int>
    </lst>
    <lst name="f0_ws">
      <int name="zero_1">25</int>
      <int name="zero_2">25</int>
    </lst>
    <lst name="f0_ws">
      <int name="zero_1">25</int>
      <int name="zero_2">25</int>
    </lst>
    <lst name="f1_ws">
      <int name="one_1">33</int>
      <int name="one_3">17</int>
    </lst>
    <lst name="f1_ws">
      <int name="one_1">33</int>
      <int name="one_3">17</int>
    </lst>
    <lst name="f1_ws">
      <int name="one_1">33</int>
      <int name="one_3">17</int>
    </lst>
    <lst name="f1_ws">
      <int name="one_1">33</int>
      <int name="one_3">17</int>
    </lst>
    <lst name="f1_ws">
      <int name="one_1">33</int>
      <int name="one_3">17</int>
    </lst>
    <lst name="f2_ws">
      <int name="two_1">37</int>
      <int name="two_4">13</int>
    </lst>
    <lst name="f2_ws">
      <int name="two_1">37</int>
      <int name="two_4">13</int>
    </lst>
    <lst name="f2_ws">
      <int name="two_1">37</int>
      <int name="two_4">13</int>
    </lst>
    <lst name="f2_ws">
      <int name="two_1">37</int>
      <int name="two_4">13</int>
    </lst>
    <lst name="f2_ws">
      <int name="two_1">37</int>
      <int name="two_4">13</int>
    </lst>
    <lst name="f3_ws">
      <int name="three_1">40</int>
      <int name="three_5">10</int>
    </lst>

    <lst name="f3_ws">
      <int name="three_1">40</int>
      <int name="three_5">10</int>
    </lst>
    <lst name="f3_ws">
      <int name="three_1">40</int>
      <int 

[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-08-01 Thread David Boychuck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Boychuck updated SOLR-6066:
-

Attachment: SOLR-6066.patch

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when 
 CollapsingQParserPlugin is used together with QueryElevationComponent, an 
 additional fq has no effect.
 I use the following test case to show this issue (it will fail):
 {code:java}
 String[] doc = {"id","1", "term_s", "", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "5", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {"id","2", "term_s","", "group_s", "group1", 
 "category_s", "cat2", "test_ti", "50", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc1));
 String[] doc2 = {"id","3", "term_s", "", "test_ti", "5000", 
 "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {"id","4", "term_s", "", "test_ti", "500", "test_tl", 
 "1000", "test_tf", "2000"};
 assertU(adoc(doc3));
 String[] doc4 = {"id","5", "term_s", "", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "4", "test_tl", "10", "test_tf", "2000"};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {"id","6", "term_s","", "group_s", "group2", 
 "category_s", "cat1", "test_ti", "10", "test_tl", "100", "test_tf", "200"};
 assertU(adoc(doc5));
 assertU(commit());
 // Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add("q", "");
 params.add("fq", "{!collapse field=group_s}");
 params.add("fq", "category_s:cat1");
 params.add("defType", "edismax");
 params.add("bf", "field(test_ti)");
 params.add("qf", "term_s");
 params.add("qt", "/elevate");
 params.add("elevateIds", "2");
 assertQ(req(params), "*[count(//doc)=1]",
 "//result/doc[1]/float[@name='id'][.='6.0']");
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6299) Facet count on facet queries returns different results if #shards > 1

2014-08-01 Thread Vamsee Yarlagadda (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vamsee Yarlagadda resolved SOLR-6299.
-

Resolution: Not a Problem

Thanks for the insight, [~tomasflobbe].
Looks like we either have to do custom sharding to make sure all the docs 
relevant to the query are in the same shard when running a grouping request, or 
just use a single-shard system.

Resolving this Jira as not a bug.
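The "custom sharding" mentioned above can be done with Solr's compositeId router: prefixing the document id with a shared route key (`key!id`) sends all docs with that key to the same shard, so a grouping request sees the whole group locally. A minimal sketch of the id construction only (the choice of route key here is illustrative, not from the original test):

```java
// Sketch of compositeId routing: documents whose ids share the same
// "routeKey!" prefix are hashed to the same shard by Solr's compositeId
// router. Route key and ids below are illustrative.
class CompositeIdExample {
    static String routedId(String routeKey, String docId) {
        return routeKey + "!" + docId; // compositeId syntax: shardKey!docId
    }

    public static void main(String[] args) {
        // Route all docs for airport "ams" to one shard:
        System.out.println(routedId("ams", "2000")); // ams!2000
        System.out.println(routedId("ams", "2002")); // ams!2002
    }
}
```

With ids built this way, a `group.facet` query restricted to one route key never needs cross-shard merging, which is what makes the single-shard counts come out right.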

 Facet count on facet queries returns different results if #shards > 1
 -

 Key: SOLR-6299
 URL: https://issues.apache.org/jira/browse/SOLR-6299
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda
  Labels: faceting

 I am trying to run some facet counts on facet queries, and it looks like I 
 get different counts if I use more than 1 shard in the SolrCloud cluster.
 Here is the upstream unit test:
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L173
 Setup:
 * Ingested 5 solr docs.
 {code}
 {
   "responseHeader": {
     "status": 0,
     "QTime": 22,
     "params": {
       "indent": "true",
       "q": "*:*",
       "_": "1406346687337",
       "wt": "json"
     }
   },
   "response": {
     "numFound": 5,
     "start": 0,
     "maxScore": 1,
     "docs": [
       {
         "id": "2004",
         "range_facet_l": [2004],
         "hotel_s1": "b",
         "airport_s1": "ams",
         "duration_i1": 5,
         "_version_": 1474661321774465000,
         "timestamp": "2014-07-26T03:50:27.975Z",
         "multiDefault": ["muLti-Default"],
         "intDefault": 42
       },
       {
         "id": "2000",
         "range_facet_l": [2000],
         "hotel_s1": "a",
         "airport_s1": "ams",
         "duration_i1": 5,
         "_version_": 1474661323604230100,
         "timestamp": "2014-07-26T03:50:29.734Z",
         "multiDefault": ["muLti-Default"],
         "intDefault": 42
       },
       {
         "id": "2003",
         "range_facet_l": [2003],
         "hotel_s1": "b",
         "airport_s1": "ams",
         "duration_i1": 5,
         "_version_": 1474661326312702000,
         "timestamp": "2014-07-26T03:50:32.317Z",
         "multiDefault": ["muLti-Default"],
         "intDefault": 42
       },
       {
         "id": "2001",
         "range_facet_l": [2001],
         "hotel_s1": "a",
         "airport_s1": "dus",
         "duration_i1": 10,
         "_version_": 1474661326389248000,
         "timestamp": "2014-07-26T03:50:32.375Z",
         "multiDefault": ["muLti-Default"],
         "intDefault": 42
       },
       {
         "id": "2002",
         "range_facet_l": [2002],
         "hotel_s1": "b",
         "airport_s1": "ams",
         "duration_i1": 10,
         "_version_": 1474661326464745500,
         "timestamp": "2014-07-26T03:50:32.446Z",
         "multiDefault": ["muLti-Default"],
         "intDefault": 42
       }
     ]
   }
 }
 {code}
 Here is the query being run:
 {code}
 Test code:
 assertQ(
     req(
         "q", "*:*",
         "fq", "id:[2000 TO 2004]",
         "group", "true",
         "group.facet", "true",
         "group.field", "hotel_s1",
         "facet", "true",
         "facet.limit", facetLimit,
         "facet.query", "airport_s1:ams"
     ),
     "//lst[@name='facet_queries']/int[@name='airport_s1:ams'][.='2']"
 );
 $ curl "http://localhost:8983/solr/collection1/select?facet=true&facet.query=airport_s1%3Aams&q=*%3A*&facet.limit=-100&group.field=hotel_s1&group=true&group.facet=true&fq=id%3A%5B2000+TO+2004%5D&indent=true&wt=xml"
 {code}
 Now, if I issue the query on a *1*-shard system (works as expected):
 {code}
 $ curl "http://localhost:8983/solr/collection1/select?facet=true&facet.query=airport_s1%3Aams&q=*%3A*&facet.limit=-100&group.field=hotel_s1&group=true&group.facet=true&fq=id%3A%5B2000+TO+2004%5D&indent=true&wt=xml"
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">17</int>
   <lst name="params">
     <str name="facet">true</str>
     <str name="indent">true</str>
     <str name="facet.query">airport_s1:ams</str>
     <str name="q">*:*</str>
     <str name="facet.limit">-100</str>
     <str name="group.field">hotel_s1</str>
     <str name="group">true</str>
     <str name="wt">xml</str>
     <str name="fq">id:[2000 TO 2004]</str>
     <str name="group.facet">true</str>
   </lst>
 </lst>
 <lst name="grouped">
   <lst name="hotel_s1">
     <int name="matches">5</int>
     <arr name="groups">
       <lst>
         <str name="groupValue">a</str>
         <result name="doclist" numFound="2" start="0">
           <doc>
             <int name="id">2001</int>
             <arr name="range_facet_l">
               <long>2001</long>
             </arr>
             <str name="hotel_s1">a</str>

[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-08-01 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14083269#comment-14083269
 ] 

David Boychuck commented on SOLR-6066:
--

I uploaded a patch. The change is basically to store the docIds in shared 
memory as they are collected, and then append them to their correct positions 
in the finish() method. This seems to be working for me for now, until Joel 
refactors.
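A rough sketch of that collect-then-finish pattern, independent of the real Solr/Lucene DelegatingCollector API (the class and method names here are illustrative, not the actual patch):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of the two-phase pattern described above: doc ids are
// buffered as they are collected, and only merged into their final positions
// once collection is finished. This does NOT use the real Lucene/Solr
// collector API; it only illustrates the shape of the change.
class BufferingCollector {
    private final List<Integer> elevatedDocs = new ArrayList<>();
    private final List<Integer> collapsedDocs = new ArrayList<>();

    // Called once per matching document during collection.
    void collect(int docId, boolean elevated) {
        if (elevated) {
            elevatedDocs.add(docId);   // remember elevated docs for later
        } else {
            collapsedDocs.add(docId);  // normal collapsed hits
        }
    }

    // Called after collection: elevated docs are appended ahead of the
    // collapsed ones, i.e. placed in "their correct positions".
    List<Integer> finish() {
        List<Integer> out = new ArrayList<>(elevatedDocs);
        out.addAll(collapsedDocs);
        return out;
    }

    public static void main(String[] args) {
        BufferingCollector c = new BufferingCollector();
        c.collect(5, false);
        c.collect(2, true);   // doc 2 is elevated
        c.collect(7, false);
        System.out.println(c.finish()); // elevated doc first: [2, 5, 7]
    }
}
```

The point of deferring to finish() is that elevation order can only be decided once every segment has been collected.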




[jira] [Updated] (SOLR-6300) facet.mincount fails to work if SolrCloud distrib=true is set

2014-08-01 Thread Vamsee Yarlagadda (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vamsee Yarlagadda updated SOLR-6300:


Component/s: SearchComponents - other

 facet.mincount fails to work if SolrCloud distrib=true is set
 -

 Key: SOLR-6300
 URL: https://issues.apache.org/jira/browse/SOLR-6300
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other, SolrCloud
Affects Versions: 5.0
Reporter: Vamsee Yarlagadda

 I notice that using facet.mincount in SolrCloud mode with distrib=true fails 
 to filter the facets based on the count. However, the same query with 
 distrib=false works as expected.
 * Indexed some data as provided by the upstream test.
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L633
 * Test being run:
 https://github.com/apache/lucene-solr/blob/trunk/solr/core/src/test/org/apache/solr/request/SimpleFacetsTest.java#L657
 * Running in SolrCloud mode with distrib=false (facet.mincount works as 
 expected)
 {code}
 $ curl "http://search-testing-c5-3.ent.cloudera.com:8983/solr/simple_faceting_coll/select?facet.date.start=1976-07-01T00%3A00%3A00.000Z&facet=true&facet.mincount=1&q=*%3A*&facet.date=bday&facet.date.other=all&facet.date.gap=%2B1DAY&facet.date.end=1976-07-01T00%3A00%3A00.000Z%2B1MONTH&rows=0&indent=true&wt=xml&distrib=false"
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">3</int>
   <lst name="params">
     <str name="facet.date.start">1976-07-01T00:00:00.000Z</str>
     <str name="facet">true</str>
     <str name="indent">true</str>
     <str name="facet.mincount">1</str>
     <str name="q">*:*</str>
     <str name="facet.date">bday</str>
     <str name="distrib">false</str>
     <str name="facet.date.gap">+1DAY</str>
     <str name="facet.date.other">all</str>
     <str name="wt">xml</str>
     <str name="facet.date.end">1976-07-01T00:00:00.000Z+1MONTH</str>
     <str name="rows">0</str>
   </lst>
 </lst>
 <result name="response" numFound="33" start="0">
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields"/>
   <lst name="facet_dates">
     <lst name="bday">
       <int name="1976-07-03T00:00:00Z">1</int>
       <int name="1976-07-04T00:00:00Z">1</int>
       <int name="1976-07-05T00:00:00Z">1</int>
       <int name="1976-07-13T00:00:00Z">1</int>
       <int name="1976-07-15T00:00:00Z">1</int>
       <int name="1976-07-21T00:00:00Z">1</int>
       <int name="1976-07-30T00:00:00Z">1</int>
       <str name="gap">+1DAY</str>
       <date name="start">1976-07-01T00:00:00Z</date>
       <date name="end">1976-08-01T00:00:00Z</date>
       <int name="before">2</int>
       <int name="after">0</int>
       <int name="between">6</int>
     </lst>
   </lst>
   <lst name="facet_ranges"/>
 </lst>
 </response>
 {code}
 * SolrCloud mode with distrib=true (facet.mincount fails to show effect)
 {code}
 $ curl "http://search-testing-c5-3.ent.cloudera.com:8983/solr/simple_faceting_coll/select?facet.date.start=1976-07-01T00%3A00%3A00.000Z&facet=true&facet.mincount=1&q=*%3A*&facet.date=bday&facet.date.other=all&facet.date.gap=%2B1DAY&facet.date.end=1976-07-01T00%3A00%3A00.000Z%2B1MONTH&rows=0&indent=true&wt=xml&distrib=true"
 <?xml version="1.0" encoding="UTF-8"?>
 <response>
 <lst name="responseHeader">
   <int name="status">0</int>
   <int name="QTime">12</int>
   <lst name="params">
     <str name="facet.date.start">1976-07-01T00:00:00.000Z</str>
     <str name="facet">true</str>
     <str name="indent">true</str>
     <str name="facet.mincount">1</str>
     <str name="q">*:*</str>
     <str name="facet.date">bday</str>
     <str name="distrib">true</str>
     <str name="facet.date.gap">+1DAY</str>
     <str name="facet.date.other">all</str>
     <str name="wt">xml</str>
     <str name="facet.date.end">1976-07-01T00:00:00.000Z+1MONTH</str>
     <str name="rows">0</str>
   </lst>
 </lst>
 <result name="response" numFound="63" start="0" maxScore="1.0">
 </result>
 <lst name="facet_counts">
   <lst name="facet_queries"/>
   <lst name="facet_fields"/>
   <lst name="facet_dates">
     <lst name="bday">
       <int name="1976-07-01T00:00:00Z">0</int>
       <int name="1976-07-02T00:00:00Z">0</int>
       <int name="1976-07-03T00:00:00Z">2</int>
       <int name="1976-07-04T00:00:00Z">2</int>
       <int name="1976-07-05T00:00:00Z">2</int>
       <int name="1976-07-06T00:00:00Z">0</int>
       <int name="1976-07-07T00:00:00Z">0</int>
       <int name="1976-07-08T00:00:00Z">0</int>
       <int name="1976-07-09T00:00:00Z">0</int>
       <int name="1976-07-10T00:00:00Z">0</int>
       <int name="1976-07-11T00:00:00Z">0</int>
       <int name="1976-07-12T00:00:00Z">1</int>
       <int name="1976-07-13T00:00:00Z">1</int>
       <int name="1976-07-14T00:00:00Z">0</int>
       <int name="1976-07-15T00:00:00Z">2</int>
       <int name="1976-07-16T00:00:00Z">0</int>
       <int name="1976-07-17T00:00:00Z">0</int>
       <int name="1976-07-18T00:00:00Z">0</int>
       <int name="1976-07-19T00:00:00Z">0</int>
       <int name="1976-07-20T00:00:00Z">0</int>
       <int name="1976-07-21T00:00:00Z">1</int>
       <int name="1976-07-22T00:00:00Z">0</int>
       <int name="1976-07-23T00:00:00Z">0</int>
 

[JENKINS] Lucene-Solr-4.x-MacOSX (64bit/jdk1.8.0) - Build # 1710 - Failure!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/1710/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.schema.TestCloudSchemaless.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: 
https://127.0.0.1:58462/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:58462/collection1
at 
__randomizedtesting.SeedInfo.seed([F697FF42FF3A747A:7771715A88651446]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:561)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at 
org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:68)
at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:54)
at 
org.apache.solr.schema.TestCloudSchemaless.doTest(TestCloudSchemaless.java:140)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.GeneratedMethodAccessor48.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
   

[jira] [Commented] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-08-01 Thread David Boychuck (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14083277#comment-14083277
 ] 

David Boychuck commented on SOLR-6066:
--

I wasn't sure if the collect method is threaded. If it is, the code will need 
to be updated to use a thread-safe collection instead of a HashSet.
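If collect() can indeed be called from multiple threads, one drop-in option on Java 8 is a concurrent set instead of a HashSet. A minimal sketch (class and field names here are illustrative, not from the patch):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of buffering doc ids in a thread-safe set: ConcurrentHashMap-backed
// sets tolerate concurrent add() calls, unlike a plain HashSet.
class DocIdBuffer {
    private final Set<Integer> docIds = ConcurrentHashMap.newKeySet();

    void collect(int docId) {
        docIds.add(docId); // safe under concurrent calls
    }

    int size() {
        return docIds.size();
    }

    public static void main(String[] args) throws InterruptedException {
        DocIdBuffer buf = new DocIdBuffer();
        // Two threads adding disjoint id ranges concurrently:
        Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) buf.collect(i); });
        Thread t2 = new Thread(() -> { for (int i = 1000; i < 2000; i++) buf.collect(i); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(buf.size()); // 2000
    }
}
```

If collection turns out to be single-threaded per request, a plain HashSet avoids the (small) synchronization overhead.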




[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-08-01 Thread David Boychuck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Boychuck updated SOLR-6066:
-

Attachment: (was: SOLR-6066.patch)




[jira] [Updated] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-08-01 Thread David Boychuck (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Boychuck updated SOLR-6066:
-

Attachment: SOLR-6066.patch




[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_11) - Build # 4224 - Failure!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4224/
Java: 64bit/jdk1.8.0_11 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  
org.apache.lucene.index.TestMixedDocValuesUpdates.testManyReopensAndFields

Error Message:
MockDirectoryWrapper: cannot close: there are still open files: 
{_b_1_Lucene49_0.dvd=1, _b.cfs=1}

Stack Trace:
java.lang.RuntimeException: MockDirectoryWrapper: cannot close: there are still 
open files: {_b_1_Lucene49_0.dvd=1, _b.cfs=1}
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:672)
at org.apache.lucene.util.IOUtils.close(IOUtils.java:77)
at 
org.apache.lucene.index.TestMixedDocValuesUpdates.testManyReopensAndFields(TestMixedDocValuesUpdates.java:153)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: unclosed IndexInput: _b.cfs
at org.apache.lucene.store.MockDirectoryWrapper.addFileHandle(MockDirectoryWrapper.java:559)
at org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:603)
at 

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 595 - Still Failing

2014-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/595/

1 tests failed.
REGRESSION:  org.apache.lucene.codecs.idversion.TestIDVersionPostingsFormat.testRandom

Error Message:
String index out of range: -1

Stack Trace:
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at __randomizedtesting.SeedInfo.seed([2E117A0408B6514F:5C5D5F0BB9D6E73C]:0)
at java.lang.String.substring(String.java:1871)
at org.apache.lucene.codecs.idversion.TestIDVersionPostingsFormat$4.next(TestIDVersionPostingsFormat.java:148)
at org.apache.lucene.codecs.idversion.TestIDVersionPostingsFormat.testRandom(TestIDVersionPostingsFormat.java:283)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 10211 lines...]
   [junit4] Suite: org.apache.lucene.codecs.idversion.TestIDVersionPostingsFormat
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test -Dtestcase=TestIDVersionPostingsFormat -Dtests.method=testRandom -Dtests.seed=2E117A0408B6514F -Dtests.multiplier=2 
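The StringIndexOutOfBoundsException above is the usual symptom of feeding String.indexOf()'s -1 "not found" result straight into substring(). A minimal stand-alone illustration of the pattern and its guard (hypothetical helper, not the test's actual code):

```java
public class SubstringPitfall {
    // Hypothetical helper: return the suffix after the first ':',
    // or the whole string when there is no ':' (indexOf returns -1).
    static String afterColon(String s) {
        int i = s.indexOf(':');
        return i >= 0 ? s.substring(i + 1) : s;
    }

    public static void main(String[] args) {
        System.out.println(afterColon("prefix:rest")); // rest
        System.out.println(afterColon("noseparator")); // noseparator
        try {
            // The unguarded version throws, as in the failure above:
            "noseparator".substring("noseparator".indexOf(':'));
        } catch (StringIndexOutOfBoundsException e) {
            System.out.println("caught StringIndexOutOfBoundsException");
        }
    }
}
```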

[jira] [Comment Edited] (SOLR-6066) CollapsingQParserPlugin + Elevation does not respects fq (filter query)

2014-08-01 Thread David Boychuck (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14083269#comment-14083269 ]

David Boychuck edited comment on SOLR-6066 at 8/2/14 3:19 AM:
--

I uploaded a patch. The change is basically to store the docIds in shared 
memory as they are collected, and then, in the finish method, append the elevated 
docs at their correct positions (if those documents have been collected). 
This seems to be working for me for now until Joel refactors.


was (Author: dboychuck):
I uploaded a patch. The change is basically to store the docIds in shared 
memory as they are collected, and then append them to their 
correct positions in the finish method. This seems to be working for me for now 
until Joel refactors.
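The collect-then-finish approach described in the comment can be sketched roughly as follows (a simplified stand-alone model of the idea, not the actual patch and not Solr's DelegatingCollector API; all names here are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ElevationFinishSketch {
    // Buffer docIds during collection, then in a finish() step move the
    // elevated docIds to the front -- but only if they were actually
    // collected, i.e. they survived the filter queries and the collapse.
    static List<Integer> finish(List<Integer> collected, Set<Integer> elevated) {
        List<Integer> result = new ArrayList<>();
        for (int docId : collected) {
            if (elevated.contains(docId)) {
                result.add(docId); // elevated docs first
            }
        }
        for (int docId : collected) {
            if (!elevated.contains(docId)) {
                result.add(docId); // remaining docs keep their collected order
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> collected = Arrays.asList(4, 7, 2, 9);
        // docId 99 is elevated but never collected, so it is dropped:
        Set<Integer> elevated = new LinkedHashSet<>(Arrays.asList(2, 99));
        System.out.println(finish(collected, elevated)); // [2, 4, 7, 9]
    }
}
```

The key point the comment makes is the parenthetical: an elevated document that was filtered out (e.g. by an fq) is never appended, because only collected docIds are spliced.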

 CollapsingQParserPlugin + Elevation does not respects fq (filter query) 
 --

 Key: SOLR-6066
 URL: https://issues.apache.org/jira/browse/SOLR-6066
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.8
Reporter: Herb Jiang
Assignee: Joel Bernstein
 Fix For: 4.9

 Attachments: SOLR-6066.patch, TestCollapseQParserPlugin.java


 QueryElevationComponent respects the fq parameter. But when use 
 CollapsingQParserPlugin with QueryElevationComponent, additional fq has no 
 effect.
 I use the following test case to show this issue. (It will fail.)
 {code:java}
 String[] doc = {id,1, term_s, , group_s, group1, 
 category_s, cat2, test_ti, 5, test_tl, 10, test_tf, 2000};
 assertU(adoc(doc));
 assertU(commit());
 String[] doc1 = {id,2, term_s,, group_s, group1, 
 category_s, cat2, test_ti, 50, test_tl, 100, test_tf, 200};
 assertU(adoc(doc1));
 String[] doc2 = {id,3, term_s, , test_ti, 5000, 
 test_tl, 100, test_tf, 200};
 assertU(adoc(doc2));
 assertU(commit());
 String[] doc3 = {id,4, term_s, , test_ti, 500, test_tl, 
 1000, test_tf, 2000};
 assertU(adoc(doc3));
 String[] doc4 = {id,5, term_s, , group_s, group2, 
 category_s, cat1, test_ti, 4, test_tl, 10, test_tf, 2000};
 assertU(adoc(doc4));
 assertU(commit());
 String[] doc5 = {id,6, term_s,, group_s, group2, 
 category_s, cat1, test_ti, 10, test_tl, 100, test_tf, 200};
 assertU(adoc(doc5));
 assertU(commit());
 //Test additional filter query when using collapse
 params = new ModifiableSolrParams();
 params.add(q, );
 params.add(fq, {!collapse field=group_s});
 params.add(fq, category_s:cat1);
 params.add(defType, edismax);
 params.add(bf, field(test_ti));
 params.add(qf, term_s);
 params.add(qt, /elevate);
 params.add(elevateIds, 2);
 assertQ(req(params), *[count(//doc)=1],
 //result/doc[1]/float[@name='id'][.='6.0']);
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-trunk-Java7 - Build # 4787 - Still Failing

2014-08-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-trunk-Java7/4787/

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.handler.TestReplicationHandlerBackup: 1) Thread[id=13372, 
name=Thread-4288, state=RUNNABLE, group=TGRP-TestReplicationHandlerBackup]  
   at java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.handler.TestReplicationHandlerBackup: 
   1) Thread[id=13372, name=Thread-4288, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
at java.net.URL.openStream(URL.java:1037)
at org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)
at __randomizedtesting.SeedInfo.seed([CF5BC0D707786548]:0)
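The leaked BackupThread above is blocked in a socket connect via the timeout-less URL.openStream(), so nothing can ever unstick it. A common guard (a hedged sketch, not the test's actual fix) is to open the URLConnection explicitly and set finite connect/read timeouts before reading:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class TimedFetch {
    // Apply finite timeouts so a stalled connect or read fails fast
    // instead of pinning the thread forever in socketConnect.
    static URLConnection configure(URLConnection conn) {
        conn.setConnectTimeout(5_000); // ms allowed to establish the connection
        conn.setReadTimeout(5_000);    // ms allowed between bytes once connected
        return conn;
    }

    static InputStream open(String url) throws IOException {
        // Unlike URL.openStream(), this path lets us set timeouts first.
        return configure(new URL(url).openConnection()).getInputStream();
    }

    public static void main(String[] args) throws Exception {
        // openConnection() does not touch the network, so this is safe to run:
        URLConnection conn = configure(new URL("http://example.com/").openConnection());
        System.out.println(conn.getConnectTimeout() + " " + conn.getReadTimeout());
    }
}
```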


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandlerBackup

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=13372, name=Thread-4288, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup] at 
java.net.PlainSocketImpl.socketConnect(Native Method) at 
java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
 at 
java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
 at 
java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)  
   at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at 
java.net.Socket.connect(Socket.java:579) at 
java.net.Socket.connect(Socket.java:528) at 
sun.net.NetworkClient.doConnect(NetworkClient.java:180) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:432) at 
sun.net.www.http.HttpClient.openServer(HttpClient.java:527) at 
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:652) at 
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1323)
 at java.net.URL.openStream(URL.java:1037) at 
org.apache.solr.handler.TestReplicationHandlerBackup$BackupThread.run(TestReplicationHandlerBackup.java:314)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=13372, name=Thread-4288, state=RUNNABLE, 
group=TGRP-TestReplicationHandlerBackup]
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:579)
at java.net.Socket.connect(Socket.java:528)
at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
at 

Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 595 - Still Failing

2014-08-01 Thread Shalin Shekhar Mangar
I can reproduce this failure.


On Sat, Aug 2, 2014 at 7:34 AM, Apache Jenkins Server 
jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/595/

 1 tests failed.
 REGRESSION:  org.apache.lucene.codecs.idversion.TestIDVersionPostingsFormat.testRandom

 Error Message:
 String index out of range: -1


[JENKINS] Lucene-Solr-4.x-Windows (32bit/jdk1.7.0_65) - Build # 4133 - Failure!

2014-08-01 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Windows/4133/
Java: 32bit/jdk1.7.0_65 -client -XX:+UseSerialGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.MultiThreadedOCPTest.testDistribSearch

Error Message:
Task 3002 did not complete, final state: failed

Stack Trace:
java.lang.AssertionError: Task 3002 did not complete, final state: failed
at __randomizedtesting.SeedInfo.seed([A8F9350703B860C5:291FBB1F74E700F9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.MultiThreadedOCPTest.testDeduplicationOfSubmittedTasks(MultiThreadedOCPTest.java:163)
at org.apache.solr.cloud.MultiThreadedOCPTest.doTest(MultiThreadedOCPTest.java:72)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:867)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 