[JENKINS-EA] Lucene-Solr-6.x-Linux (32bit/jdk-9-ea+119) - Build # 747 - Still Failing!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/747/
Java: 32bit/jdk-9-ea+119 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B 
val", "":{"v":0},  from server:  http://127.0.0.1:38829/collection1

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0},  from server:  http://127.0.0.1:38829/collection1
at 
__randomizedtesting.SeedInfo.seed([819FD2FB7BA86F:88D5A0085587C597]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:457)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:160)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:62)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5870 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5870/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([27682617D26FE6F9:D01BC84F1487491F]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1327)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11539 lines...]
   [junit4] Suite: 

[jira] [Created] (SOLR-9166) Export handler returns zero for numeric fields that are not in the original doc

2016-05-26 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-9166:


 Summary: Export handler returns zero for numeric fields 
that are not in the original doc
 Key: SOLR-9166
 URL: https://issues.apache.org/jira/browse/SOLR-9166
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson


From the dev list discussion:

My original post.
Zero is different from not
existing. And let's claim that I want to process a stream and, say,
facet on an integer field over the result set. There's no way on the
client side to distinguish between a document that has a zero in the
field and one that didn't have the field in the first place so I'll
over-count the zero bucket.

From Dennis Gove:
Is this true for non-numeric fields as well? I agree that this seems like a 
very bad thing.

I can't imagine that a fix would cause a problem with Streaming Expressions, 
ParallelSQL, or other given that the /select handler is not returning 0 for 
these missing fields (the /select handler is the default handler for the 
Streaming API so if nulls were a problem I imagine we'd have already seen it). 

That said, within Streaming Expressions there is a select(...) function which 
supports a replace(...) operation which allows you to replace one value (or 
null) with some other value. If a 0 were necessary one could use a select(...) 
to replace null with 0 using an expression like this 
   select(, replace(fieldA, null, withValue=0)). 
The end result of that would be that the field fieldA would never have a null 
value and for all tuples where a null value existed it would be replaced with 0.

Details on the select function can be found at 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61330338#StreamingExpressions-select.
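
As a concrete illustration, here is a minimal, hypothetical Java sketch that wraps an 
assumed search(...) stream in the select(...)/replace(...) expression described above and 
posts it to the /stream handler via the expr parameter; the collection name, field names, 
sort, and Solr URL are assumptions, not part of the issue.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class ReplaceNullSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical expression: export id and fieldA, replacing null fieldA values with 0.
    String expr = "select("
        + "search(collection1, q=\"*:*\", fl=\"id,fieldA\", sort=\"id asc\", qt=\"/export\"),"
        + "id, fieldA,"
        + "replace(fieldA, null, withValue=0))";

    // Streaming expressions are sent to the /stream handler as the "expr" parameter.
    URL url = new URL("http://localhost:8983/solr/collection1/stream?expr="
        + URLEncoder.encode(expr, "UTF-8"));
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try (BufferedReader in = new BufferedReader(
        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line); // each tuple should carry fieldA=0 where the doc had no value
      }
    } finally {
      conn.disconnect();
    }
  }
}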


And to answer Dennis's question, null gets returned for string DocValues fields.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Export handler returns zero for missing fields.

2016-05-26 Thread Dennis Gove
Is this true for non-numeric fields as well? I agree that this seems like a
very bad thing.

I can't imagine that a fix would cause a problem with Streaming
Expressions, ParallelSQL, or other given that the /select handler is not
returning 0 for these missing fields (the /select handler is the default
handler for the Streaming API so if nulls were a problem I imagine we'd
have already seen it).

That said, within Streaming Expressions there is a select(...) function
which supports a replace(...) operation which allows you to replace one
value (or null) with some other value. If a 0 were necessary one could use
a select(...) to replace null with 0 using an expression like this
   select(, replace(fieldA, null, withValue=0)).
The end result of that would be that the field fieldA would never have a
null value and for all tuples where a null value existed it would be
replaced with 0.

Details on the select function can be found at
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61330338#StreamingExpressions-select
.

- Dennis

On Thu, May 26, 2016 at 11:35 PM, Erick Erickson 
wrote:

> This seems to me to be A Bad Thing. Zero is different from not
> existing. And let's claim that I want to process a stream and, say,
> facet on an integer field over the result set. There's no way on the
> client side to distinguish between a document that has a zero in the
> field and one that didn't have the field in the first place so I'll
> over-count the zero bucket.
>
> So before I raise a JIRA, my question is whether this is expected
> behavior or not? I've found a mechanism that _shouldn't_ be very
> expensive to omit the field if it doesn't exist in the returned
> tuples.
>
> Now, how badly this would break Streaming Expressions, ParallelSQL and
> the like I haven't looked into yet.
>
> So before I work up a trial patch am I going off in the weeds?
>
> Best,
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[JENKINS-EA] Lucene-Solr-6.0-Linux (64bit/jdk-9-ea+119) - Build # 190 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.0-Linux/190/
Java: 64bit/jdk-9-ea+119 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestReqParamsAPI.test

Error Message:
Could not get expected value  'CY val' for path 'response/params/y/c' full 
output: {   "responseHeader":{ "status":0, "QTime":0},   "response":{   
  "znodeVersion":0, "params":{"x":{ "a":"A val", "b":"B 
val", "":{"v":0}

Stack Trace:
java.lang.AssertionError: Could not get expected value  'CY val' for path 
'response/params/y/c' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "response":{
"znodeVersion":0,
"params":{"x":{
"a":"A val",
"b":"B val",
"":{"v":0}
at 
__randomizedtesting.SeedInfo.seed([7DF3D77133611EAA:F5A7E8AB9D9D7352]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:458)
at 
org.apache.solr.handler.TestReqParamsAPI.testReqParams(TestReqParamsAPI.java:165)
at 
org.apache.solr.handler.TestReqParamsAPI.test(TestReqParamsAPI.java:67)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(java.base@9-ea/Native 
Method)
at 
jdk.internal.reflect.NativeMethodAccessorImpl.invoke(java.base@9-ea/NativeMethodAccessorImpl.java:62)
at 
jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(java.base@9-ea/DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(java.base@9-ea/Method.java:531)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Export handler returns zero for missing fields.

2016-05-26 Thread Erick Erickson
This seems to me to be A Bad Thing. Zero is different from not
existing. And let's claim that I want to process a stream and, say,
facet on an integer field over the result set. There's no way on the
client side to distinguish between a document that has a zero in the
field and one that didn't have the field in the first place so I'll
over-count the zero bucket.

So before I raise a JIRA, my question is whether this is expected
behavior or not? I've found a mechanism that _shouldn't_ be very
expensive to omit the field if it doesn't exist in the returned
tuples.

Now, how badly this would break Streaming Expressions, ParallelSQL and
the like I haven't looked into yet.

So before I work up a trial patch am I going off in the weeds?

Best,
Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-26 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15303367#comment-15303367
 ] 

Hoss Man commented on SOLR-5944:



Ok ... more in-depth comments reviewing the latest patch (ignoring some of the 
general, higher-level stuff I've previously commented on).

(So far I've still focused on reviewing the tests, because we should make sure 
they're rock solid before any discussion of refactoring/improving/changing the 
code)



* in general, all these tests seem to depend on autoCommit being disabled, and 
use a config that is set up that way, but don't actually assert that it's true 
in case someone changes the configs in the future
** TestInPlaceUpdate can get direct access to the SolrCore to verify that for 
certain
** the distrib tests might be able to use one of the new config APIs to check 
this (I don't know off the top of my head)
*** at a minimum define a String constant for the config file name in 
TestInPlaceUpdate and refer to it in the other tests where the same config is 
expected, with a comment explaining that we're *assuming* it has autoCommit 
disabled and that TestInPlaceUpdate will fail if it does not.

* TestInPlaceUpdate
** SuppressCodecs should be removed
** should at least have class level javadocs explaining what's being tested
** testUpdatingDocValues
*** for addAndGetVersion calls where we don't care about the returned version, 
don't bother assigning it to a variable (distracting)
*** for addAndGetVersion calls where we do care about the returned version, we 
need to check it for every update to that doc...
**** currently version1 is compared to newVersion1 to assert that an update 
incremented the version, but in between those 2 updates are 4 other places 
where that document was updated -- we have to assert it has the expected value 
(either the same as before, or new - and if new, record it) after all of those 
addAndGetVersion calls, or we can't be sure where/why/how a bug exists if that 
existing comparison fails.
**** ideally we should be asserting the version of every doc when we query it 
right alongside the assertion for its updated "ratings" value
*** most of the use of "field(ratings)" can probably just be replaced with 
"ratings" now that DV are returnable -- although it's nice to have both 
included in the test at least once to demo that both work, but when doing that 
there should be a comment making it clear why
** testOnlyPartialUpdatesBetweenCommits
*** ditto comment about checking return value from addAndGetVersion
*** this also seems like a good place to test if doing a redundant atomic 
update (either via set to the same value or via inc=0) returns a new version or 
not -- should it?
** DocInfo should be a private static class and have some javadocs
** based on how testing has gone so far, and the discovery of LUCENE-7301, it 
seems clear that adding even single-thread, single-node, randomized testing of 
lots of diff types of add/update calls would be good
*** we could refactor/improve the "checkReplay" function I added in the last 
patch to do more testing of a randomly generated Iterable of "commands" 
(commit, doc, doc+atomic mutation, etc...) -- a tiny hypothetical sketch of such 
a command generator appears right after this list
*** and of course: improve checkReplay to verify RTG against the uncommitted 
model as well
*** testReplayFromFile and getSDdoc should probably go away once we have more 
structured tests for doing this
** createMap can be eliminated -- callers can just use SolrTestCaseJ4.map(...)
** In general the tests in this class should include more queries / sorting 
against the updated docvalues field after commits to ensure that the updated 
value is searchable & sortable
** Likewise the test methods in this class should probably have a lot more RTG 
checks -- with filter queries that constrain against the updated docvalues 
field, and checks of the expected version field -- to ensure that is all 
working properly.
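
A tiny, hypothetical Java sketch of the "randomly generated Iterable of commands" idea 
referenced above (the enum and helper names are made up, not from the patch):

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class RandomCommandSketch {
  enum Cmd { COMMIT, ADD_DOC, ATOMIC_UPDATE }

  // Build a random sequence of commands that a checkReplay-style helper could
  // execute and then verify against a simple in-memory model.
  static List<Cmd> randomCommands(Random random, int count) {
    List<Cmd> commands = new ArrayList<>();
    for (int i = 0; i < count; i++) {
      commands.add(Cmd.values()[random.nextInt(Cmd.values().length)]);
    }
    return commands;
  }
}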

* InPlaceUpdateDistribTest
** SuppressCodecs should be removed
** should at least have class level javadocs explaining what's being tested
** Once LUCENE-7301 is fixed and we can demonstrate that this passes reliably 
all of the time, we should ideally refactor this to subclass SolrCloudTestCase
** in general, the "pick a random client" logic should be refactored so that 
sometimes it randomly picks a CloudSolrClient
** there should almost certainly be some "delete all docs and optimize" cleanup 
in between all of these tests
*** easy to do in an @Before method if we refactor to subclass SolrCloudTestCase
** docValuesUpdateTest
*** should randomize numdocs
*** we need to find a way to eliminate the hardcoded "Thread.sleep(500);" 
calls...
**** if initially no docs have a rating value, then make the (first) test query 
be for {{rating:\[\* TO \*\]}} and execute it in a retry loop until the numFound 
matches numDocs (a rough sketch of such a loop follows at the end of this 
comment).
**** likewise if we ensure all ratings have a value such that abs(ratings) < X, 
then the second 
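
A minimal, hypothetical Java sketch of such a retry loop, assuming a SolrJ SolrClient, 
the rating:[* TO *] query above, and an arbitrary 30-second cap (the field name and 
timeouts are assumptions):

import java.util.concurrent.TimeUnit;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;

class RetryLoopSketch {
  // Poll until every doc with a rating value is searchable, instead of a fixed Thread.sleep(500).
  static void waitForRatings(SolrClient client, long expectedNumDocs) throws Exception {
    SolrQuery query = new SolrQuery("rating:[* TO *]");
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(30); // arbitrary cap
    while (System.nanoTime() < deadline) {
      long found = client.query(query).getResults().getNumFound();
      if (found == expectedNumDocs) {
        return; // all expected docs are visible; proceed with assertions
      }
      Thread.sleep(100); // short pause between polls
    }
    throw new AssertionError("timed out waiting for rating:[* TO *] to reach " + expectedNumDocs);
  }
}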

[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 158 - Still Failing!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/158/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.search.MergeStrategyTest.test

Error Message:
Error from server at http://127.0.0.1:35104/me/collection1: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:35104/me/collection1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:35104/me/collection1: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:35104/me/collection1
at 
__randomizedtesting.SeedInfo.seed([6F7D2A663A3722B2:E72915BC94CB4F4A]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:564)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:612)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:594)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:573)
at 
org.apache.solr.search.MergeStrategyTest.test(MergeStrategyTest.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

What's the optimal ways to measure Lucene query cost?

2016-05-26 Thread Thomas Pan
I am curious about how to measure Lucene query cost. Shall I use query latency,
or shall I dig deeper into how many postings are touched, how many fields are
returned, etc.?


Best,
Thomas

--
The journey of a thousand miles begins with one step. -- Lao Tzu
Do not go where the path may lead, go instead where there is no path and
leave a trail. -- Ralph Waldo Emerson


[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 607 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/607/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.AsyncMigrateRouteKeyTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.AsyncMigrateRouteKeyTest: 1) Thread[id=46553, 
name=OverseerHdfsCoreFailoverThread-95964982335701000-127.0.0.1:36528_-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.sleep(Native Method) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.AsyncMigrateRouteKeyTest: 
   1) Thread[id=46553, 
name=OverseerHdfsCoreFailoverThread-95964982335701000-127.0.0.1:36528_-n_01,
 state=TIMED_WAITING, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:137)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([78E241194B549656]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.AsyncMigrateRouteKeyTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=46553, 
name=OverseerHdfsCoreFailoverThread-95964982335701000-127.0.0.1:36528_-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.] at 
java.lang.Thread.interrupt0(Native Method) at 
java.lang.Thread.interrupt(Thread.java:923) at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
 at java.lang.Thread.run(Thread.java:745)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=46553, 
name=OverseerHdfsCoreFailoverThread-95964982335701000-127.0.0.1:36528_-n_01,
 state=RUNNABLE, group=Overseer Hdfs SolrCore Failover Thread.]
at java.lang.Thread.interrupt0(Native Method)
at java.lang.Thread.interrupt(Thread.java:923)
at 
org.apache.solr.cloud.OverseerAutoReplicaFailoverThread.run(OverseerAutoReplicaFailoverThread.java:139)
at java.lang.Thread.run(Thread.java:745)
at __randomizedtesting.SeedInfo.seed([78E241194B549656]:0)




Build Log:
[...truncated 11875 lines...]
   [junit4] Suite: org.apache.solr.cloud.AsyncMigrateRouteKeyTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J1/temp/solr.cloud.AsyncMigrateRouteKeyTest_78E241194B549656-001/init-core-data-001
   [junit4]   2> 2567603 INFO  
(SUITE-AsyncMigrateRouteKeyTest-seed#[78E241194B549656]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 2567605 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2567605 INFO  (Thread-6225) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2567605 INFO  (Thread-6225) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2567712 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.ZkTestServer start zk server on port:53788
   [junit4]   2> 2567712 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2567713 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   [junit4]   2> 2567716 INFO  (zkCallback-10520-thread-1) [] 
o.a.s.c.c.ConnectionManager Watcher 
org.apache.solr.common.cloud.ConnectionManager@7f108988 
name:ZooKeeperConnection Watcher:127.0.0.1:53788 got event WatchedEvent 
state:SyncConnected type:None path:null path:null type:None
   [junit4]   2> 2567716 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.c.ConnectionManager Client is connected to ZooKeeper
   [junit4]   2> 2567716 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.c.SolrZkClient Using default ZkACLProvider
   [junit4]   2> 2567717 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.c.SolrZkClient makePath: /solr
   [junit4]   2> 2567720 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.c.SolrZkClient Using default ZkCredentialsProvider
   [junit4]   2> 2567721 INFO  
(TEST-AsyncMigrateRouteKeyTest.test-seed#[78E241194B549656]) [] 
o.a.s.c.c.ConnectionManager Waiting for client to connect to ZooKeeper
   

[JENKINS] Lucene-Solr-6.x-Linux (64bit/jdk1.8.0_92) - Build # 746 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/746/
Java: 64bit/jdk1.8.0_92 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.TestReplicationHandler

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [NRTCachingDirectory]

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [NRTCachingDirectory]
at __randomizedtesting.SeedInfo.seed([18324C37DAD052B8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:256)
at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:834)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11329 lines...]
   [junit4] Suite: org.apache.solr.handler.TestReplicationHandler
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_18324C37DAD052B8-001/init-core-data-001
   [junit4]   2> 875038 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 
o.a.s.SolrTestCaseJ4 ###Starting doTestRepeater
   [junit4]   2> 875039 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_18324C37DAD052B8-001/solr-instance-001/collection1
   [junit4]   2> 875044 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 
o.e.j.s.Server jetty-9.3.8.v20160314
   [junit4]   2> 875045 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@77a30324{/solr,null,AVAILABLE}
   [junit4]   2> 875045 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 
o.e.j.s.ServerConnector Started 
ServerConnector@365acf94{HTTP/1.1,[http/1.1]}{127.0.0.1:43927}
   [junit4]   2> 875045 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 
o.e.j.s.Server Started @876715ms
   [junit4]   2> 875045 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: 
{solr.data.dir=/home/jenkins/workspace/Lucene-Solr-6.x-Linux/solr/build/solr-core/test/J0/temp/solr.handler.TestReplicationHandler_18324C37DAD052B8-001/solr-instance-001/collection1/data,
 hostContext=/solr, hostPort=43927}
   [junit4]   2> 875046 INFO  
(TEST-TestReplicationHandler.doTestRepeater-seed#[18324C37DAD052B8]) [] 

[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 165 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/165/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:61232/kzduc/zz","node_name":"127.0.0.1:61232_kzduc%2Fzz","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={   "replicationFactor":"3",   
"shards":{"shard1":{   "range":"8000-7fff",   "state":"active", 
  "replicas":{ "core_node1":{   
"core":"c8n_1x3_lf_shard1_replica2",   
"base_url":"http://127.0.0.1:61226/kzduc/zz;,   
"node_name":"127.0.0.1:61226_kzduc%2Fzz",   "state":"down"}, 
"core_node2":{   "state":"down",   
"base_url":"http://127.0.0.1:61242/kzduc/zz;,   
"core":"c8n_1x3_lf_shard1_replica3",   
"node_name":"127.0.0.1:61242_kzduc%2Fzz"}, "core_node3":{   
"core":"c8n_1x3_lf_shard1_replica1",   
"base_url":"http://127.0.0.1:61232/kzduc/zz;,   
"node_name":"127.0.0.1:61232_kzduc%2Fzz",   "state":"active",   
"leader":"true",   "router":{"name":"compositeId"},   
"maxShardsPerNode":"1",   "autoAddReplicas":"false"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 
1; 
[core_node3:{"core":"c8n_1x3_lf_shard1_replica1","base_url":"http://127.0.0.1:61232/kzduc/zz","node_name":"127.0.0.1:61232_kzduc%2Fzz","state":"active","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf)={
  "replicationFactor":"3",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node1":{
  "core":"c8n_1x3_lf_shard1_replica2",
  "base_url":"http://127.0.0.1:61226/kzduc/zz;,
  "node_name":"127.0.0.1:61226_kzduc%2Fzz",
  "state":"down"},
"core_node2":{
  "state":"down",
  "base_url":"http://127.0.0.1:61242/kzduc/zz;,
  "core":"c8n_1x3_lf_shard1_replica3",
  "node_name":"127.0.0.1:61242_kzduc%2Fzz"},
"core_node3":{
  "core":"c8n_1x3_lf_shard1_replica1",
  "base_url":"http://127.0.0.1:61232/kzduc/zz;,
  "node_name":"127.0.0.1:61232_kzduc%2Fzz",
  "state":"active",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false"}
at 
__randomizedtesting.SeedInfo.seed([2FCC685D8346923B:A79857872DBAFFC3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:168)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:55)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3297 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3297/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestSizeLimitedDistributedMap.testCleanup

Error Message:
KeeperErrorCode = NoNode for /overseer/collection-map-completed/mn-xyz_937

Stack Trace:
org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode 
for /overseer/collection-map-completed/mn-xyz_937
at 
__randomizedtesting.SeedInfo.seed([28743502562FB5EA:785E66AC2F82255F]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873)
at 
org.apache.solr.common.cloud.SolrZkClient$2.execute(SolrZkClient.java:244)
at 
org.apache.solr.common.cloud.SolrZkClient$2.execute(SolrZkClient.java:241)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.delete(SolrZkClient.java:241)
at 
org.apache.solr.cloud.SizeLimitedDistributedMap.put(SizeLimitedDistributedMap.java:69)
at 
org.apache.solr.cloud.TestSizeLimitedDistributedMap.testCleanup(TestSizeLimitedDistributedMap.java:44)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 1167 - Failure

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/1167/

No tests ran.

Build Log:
[...truncated 52 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git -c core.askpass=true 
fetch --tags --progress git://git.apache.org/lucene-solr.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: read error: Connection reset by peer

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1693)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1441)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to lucene(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor626.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy57.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at 

[JENKINS-MAVEN] Lucene-Solr-Maven-master #1765: POMs out of sync

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/1765/

No tests ran.

Build Log:
[...truncated 15 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git -c core.askpass=true 
fetch --tags --progress git://git.apache.org/lucene-solr.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: read error: Connection reset by peer

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1693)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1441)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to lucene(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor626.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy57.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at 

[JENKINS] Lucene-Artifacts-6.0 - Build # 17 - Failure

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Artifacts-6.0/17/

No tests ran.

Build Log:
[...truncated 19 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git -c core.askpass=true 
fetch --tags --progress git://git.apache.org/lucene-solr.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: read error: Connection reset by peer

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1693)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1441)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to lucene(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor626.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy57.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at 

[JENKINS-MAVEN] Lucene-Solr-Maven-6.0 #19: POMs out of sync

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-6.0/19/

No tests ran.

Build Log:
[...truncated 15 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git -c core.askpass=true 
fetch --tags --progress git://git.apache.org/lucene-solr.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: read error: Connection reset by peer

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1693)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1441)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to lucene(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor626.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy57.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at 

[JENKINS] Solr-Artifacts-6.x - Build # 69 - Failure

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Solr-Artifacts-6.x/69/

No tests ran.

Build Log:
[...truncated 18 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: Command "git -c core.askpass=true 
fetch --tags --progress git://git.apache.org/lucene-solr.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: fatal: read error: Connection reset by peer

at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1693)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1441)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:62)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:313)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:152)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler$1.call(RemoteGitImpl.java:145)
at hudson.remoting.UserRequest.perform(UserRequest.java:120)
at hudson.remoting.UserRequest.perform(UserRequest.java:48)
at hudson.remoting.Request$2.run(Request.java:326)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:68)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
at ..remote call to lucene(Native Method)
at hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1416)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:220)
at hudson.remoting.Channel.call(Channel.java:781)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.execute(RemoteGitImpl.java:145)
at sun.reflect.GeneratedMethodAccessor626.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl$CommandInvocationHandler.invoke(RemoteGitImpl.java:131)
at com.sun.proxy.$Proxy57.execute(Unknown Source)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:764)
... 11 more
ERROR: null
Retrying after 10 seconds
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url git://git.apache.org/lucene-solr.git # 
 > timeout=10
Cleaning workspace
 > git rev-parse --verify HEAD # timeout=10
Resetting working tree
 > git reset --hard # timeout=10
 > git clean -fdx # timeout=10
Fetching upstream changes from git://git.apache.org/lucene-solr.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:766)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1022)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1053)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1276)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:607)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1738)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at 

[JENKINS] Lucene-Solr-NightlyTests-6.0 - Build # 20 - Still Failing

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.0/20/

4 tests failed.
FAILED:  org.apache.solr.cloud.CollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=116272, name=collection0, 
state=RUNNABLE, group=TGRP-CollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=116272, name=collection0, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:46875: collection already exists: 
awholynewstresscollection_collection0_3
at __randomizedtesting.SeedInfo.seed([33FD9B2EC1BC674F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:577)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:241)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:230)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:372)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:325)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1100)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:870)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:806)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1595)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1616)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:990)


FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not find collection:.system

Stack Trace:
java.lang.AssertionError: Could not find collection:.system
at 
__randomizedtesting.SeedInfo.seed([33FD9B2EC1BC674F:EBB0B6793661C2EF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:151)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:130)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:852)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:116)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 

[jira] [Resolved] (SOLR-9165) Problems with the spellcheck component running search with cursor

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-9165.
--
Resolution: Fixed

Yamileydis, thank you for reporting this.

> Problems with the spellcheck component  running search with cursor
> --
>
> Key: SOLR-9165
> URL: https://issues.apache.org/jira/browse/SOLR-9165
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 5.2
>Reporter: Yamileydis Veranes
>Assignee: James Dyer
> Attachments: SOLR-9165.patch, SOLR-9165.patch
>
>
> I'm having some problems with the spellcheck component, specifically, running 
> a search with cursors.  
> When I run the following query:
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc
> the following collations are returned
> 
> 
> incendio
> 485
> 
> incendio
> 
> 
> 
> Instead, when I try to run the same query but this time using cursors
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc=*
> no collations are returned
> false
> and the server trace the following exception message.
> WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 
> docs_shard2_replica1] org.apache.solr.spelling.SpellCheckCollator; Exception 
> trying to re-query to check if a spell check possibility would return any 
> hits.
> org.apache.solr.common.SolrException: Cursor functionality requires a sort 
> containing a uniqueKey field tie breaker
>   at org.apache.solr.search.CursorMark.<init>(CursorMark.java:93)
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-9165) Problems with the spellcheck component running search with cursor

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer reassigned SOLR-9165:


Assignee: James Dyer

> Problems with the spellcheck component  running search with cursor
> --
>
> Key: SOLR-9165
> URL: https://issues.apache.org/jira/browse/SOLR-9165
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 5.2
>Reporter: Yamileydis Veranes
>Assignee: James Dyer
> Attachments: SOLR-9165.patch, SOLR-9165.patch
>
>
> I'm having some problems with the spellcheck component, specifically, running 
> a search with cursors.  
> When I run the following query:
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc
> the following collations are returned
> 
> 
> incendio
> 485
> 
> incendio
> 
> 
> 
> Instead, when I try to run the same query but this time using cursors
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc=*
> no collations are returned
> false
> and the server trace the following exception message.
> WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 
> docs_shard2_replica1] org.apache.solr.spelling.SpellCheckCollator; Exception 
> trying to re-query to check if a spell check possibility would return any 
> hits.
> org.apache.solr.common.SolrException: Cursor functionality requires a sort 
> containing a uniqueKey field tie breaker
>   at org.apache.solr.search.CursorMark.<init>(CursorMark.java:93)
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9165) Problems with the spellcheck component running search with cursor

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302811#comment-15302811
 ] 

ASF subversion and git services commented on SOLR-9165:
---

Commit f1f85e560f54371800a368aff801b7c24413ece6 in lucene-solr's branch 
refs/heads/branch_6x from jdyer1
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f1f85e5 ]

SOLR-9165: disable "cursorMark" when testing for valid SpellCheck Collations


> Problems with the spellcheck component  running search with cursor
> --
>
> Key: SOLR-9165
> URL: https://issues.apache.org/jira/browse/SOLR-9165
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 5.2
>Reporter: Yamileydis Veranes
> Attachments: SOLR-9165.patch, SOLR-9165.patch
>
>
> I'm having some problems with the spellcheck component, specifically, running 
> a search with cursors.  
> When I run the following query:
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc
> the following collations are returned
> 
> 
> incendio
> 485
> 
> incendio
> 
> 
> 
> Instead, when I try to run the same query but this time using cursors
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc=*
> no collations are returned
> false
> and the server trace the following exception message.
> WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 
> docs_shard2_replica1] org.apache.solr.spelling.SpellCheckCollator; Exception 
> trying to re-query to check if a spell check possibility would return any 
> hits.
> org.apache.solr.common.SolrException: Cursor functionality requires a sort 
> containing a uniqueKey field tie breaker
>   at org.apache.solr.search.CursorMark.<init>(CursorMark.java:93)
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (SOLR-9165) Problems with the spellcheck component running search with cursor

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302807#comment-15302807
 ] 

ASF subversion and git services commented on SOLR-9165:
---

Commit 164128f977720acc408e88b595f8621bf9760b45 in lucene-solr's branch 
refs/heads/master from jdyer1
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=164128f ]

SOLR-9165: disable "cursorMark" when testing for valid SpellCheck Collations


> Problems with the spellcheck component  running search with cursor
> --
>
> Key: SOLR-9165
> URL: https://issues.apache.org/jira/browse/SOLR-9165
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 5.2
>Reporter: Yamileydis Veranes
> Attachments: SOLR-9165.patch, SOLR-9165.patch
>
>
> I'm having some problems with the spellcheck component, specifically, running 
> a search with cursors.  
> When I run the following query:
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc
> the following collations are returned
> 
> 
> incendio
> 485
> 
> incendio
> 
> 
> 
> Instead, when I try to run the same query but this time using cursors
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc=*
> no collations are returned
> false
> and the server trace the following exception message.
> WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 
> docs_shard2_replica1] org.apache.solr.spelling.SpellCheckCollator; Exception 
> trying to re-query to check if a spell check possibility would return any 
> hits.
> org.apache.solr.common.SolrException: Cursor functionality requires a sort 
> containing a uniqueKey field tie breaker
>   at org.apache.solr.search.CursorMark.<init>(CursorMark.java:93)
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Updated] (SOLR-9165) Problems with the spellcheck component running search with cursor

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-9165:
-
Attachment: SOLR-9165.patch

Here's a straightforward fix:  don't request the cursorMark when testing the 
index for valid collations. [^SOLR-9165.patch].

I'll commit to master and branch_6x shortly.
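A minimal sketch of the idea, with illustrative names (this is not the committed patch; {{original}} stands in for the incoming request's params):

{code}
import org.apache.solr.common.params.CursorMarkParams;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.params.SolrParams;

public class CollationCheckParams {
  // Copy the request params and drop cursorMark so the collation test
  // query is not subject to the cursor's uniqueKey tie-breaker sort rule.
  static ModifiableSolrParams withoutCursor(SolrParams original) {
    ModifiableSolrParams params = new ModifiableSolrParams(original);
    params.remove(CursorMarkParams.CURSOR_MARK_PARAM); // "cursorMark"
    return params;
  }
}
{code}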

> Problems with the spellcheck component  running search with cursor
> --
>
> Key: SOLR-9165
> URL: https://issues.apache.org/jira/browse/SOLR-9165
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 5.2
>Reporter: Yamileydis Veranes
> Attachments: SOLR-9165.patch, SOLR-9165.patch
>
>
> I'm having some problems with the spellcheck component, specifically, running 
> a search with cursors.  
> When I run the following query:
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc
> the following collations are returned
> 
> 
> incendio
> 485
> 
> incendio
> 
> 
> 
> Instead, when I try to run the same query but this time using cursors
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc=*
> no collations are returned
> false
> and the server trace the following exception message.
> WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 
> docs_shard2_replica1] org.apache.solr.spelling.SpellCheckCollator; Exception 
> trying to re-query to check if a spell check possibility would return any 
> hits.
> org.apache.solr.common.SolrException: Cursor functionality requires a sort 
> containing a uniqueKey field tie breaker
>   at org.apache.solr.search.CursorMark.<init>(CursorMark.java:93)
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, 

[jira] [Updated] (LUCENE-7302) IndexWriter should tell you the order of indexing operations

2016-05-26 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-7302:
---
Attachment: LUCENE-7032.patch

Here's the applyable patch vs current master from the branch... I think it's 
close, but I need to improve javadocs.

> IndexWriter should tell you the order of indexing operations
> 
>
> Key: LUCENE-7302
> URL: https://issues.apache.org/jira/browse/LUCENE-7302
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7032.patch
>
>
> Today, when you use multiple threads to concurrently index, Lucene
> knows the effective order that those operations were applied to the
> index, but doesn't return that information back to you.
> But this is important to know, if you want to build a reliable search
> API on top of Lucene.  Combined with the recently added NRT
> replication (LUCENE-5438) it can be a strong basis for an efficient
> distributed search API.
> I think we should return this information, since we already have it,
> and since it could simplify servers (ES/Solr) on top of Lucene:
>   - They would not require locking preventing the same id from being
> indexed concurrently since they could instead check the returned
> sequence number to know which update "won", for features like
> "realtime get".  (Locking is probably still needed for features
> like optimistic concurrency).
>   - When re-applying operations from a prior commit point, e.g. on
> recovering after a crash from a transaction log, they can know
> exactly which operations made it into the commit and which did
> not, and replay only the truly missing operations.
> Not returning this just hurts people who try to build servers on top
> with clear semantics on crashing/recovering ... I also struggled with
> this when building a simple "server wrapper" on top of Lucene
> (LUCENE-5376).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9165) Problems with the spellcheck component running search with cursor

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-9165:
-
Attachment: SOLR-9165.patch

Here's a failing unit test for this one: [^SOLR-9165.patch].

> Problems with the spellcheck component  running search with cursor
> --
>
> Key: SOLR-9165
> URL: https://issues.apache.org/jira/browse/SOLR-9165
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 5.2
>Reporter: Yamileydis Veranes
> Attachments: SOLR-9165.patch
>
>
> I'm having some problems with the spellcheck component, specifically, running 
> a search with cursors.  
> When I run the following query:
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc
> the following collations are returned
> 
> 
> incendio
> 485
> 
> incendio
> 
> 
> 
> Instead, when I try to run the same query but this time using cursors
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc=*
> no collations are returned
> false
> and the server trace the following exception message.
> WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 
> docs_shard2_replica1] org.apache.solr.spelling.SpellCheckCollator; Exception 
> trying to re-query to check if a spell check possibility would return any 
> hits.
> org.apache.solr.common.SolrException: Cursor functionality requires a sort 
> containing a uniqueKey field tie breaker
>   at org.apache.solr.search.CursorMark.<init>(CursorMark.java:93)
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7302) IndexWriter should tell you the order of indexing operations

2016-05-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7302?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302757#comment-15302757
 ] 

Michael McCandless commented on LUCENE-7302:


I've been pushing changes to this branch:

  https://github.com/mikemccand/lucene-solr/tree/sequence_numbers

I think it's close ... I've resolved all nocommits, and created some
fun tests with threads updating the same doc at once, doing concurrent
commits, and verifying that what the sequence numbers claim turns out to
be true.

The changes are relatively minor: IW already "knows" the order that
operations were applied, but these methods return {{void}} today and
this changes them to return {{long}} instead.  Callers who don't
care can just ignore the returned long.

It also lets us remove the wrapper class {{TrackingIndexWriter}} which
was doing basically the same thing (returning a long for each op) but
with weaker guarantees.

These sequence numbers are fleeting, not saved into commit points,
etc., and only useful within one IW instance (they reset back to 1 on
the next IW instance).

I'll build an applyable patch and post here ...
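
A rough sketch of how a caller might use the returned sequence numbers,
assuming {{updateDocument}} is among the methods changed to return {{long}}
(per the description above; the surrounding class and field names are
illustrative):

{code}
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

class SeqNoExample {
  // Highest sequence number seen so far for id "42" (illustrative).
  final AtomicLong lastSeqForId42 = new AtomicLong(-1);

  void update(IndexWriter writer, Document doc) throws IOException {
    // Concurrent callers may race here; no external lock on the id is needed.
    long seq = writer.updateDocument(new Term("id", "42"), doc);
    // The update that produced the largest sequence number is the one that
    // "won"; remember it, e.g. for a realtime-get style lookup.
    lastSeqForId42.accumulateAndGet(seq, Math::max);
    // Note: sequence numbers are per-IndexWriter instance and not persisted.
  }
}
{code}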

> IndexWriter should tell you the order of indexing operations
> 
>
> Key: LUCENE-7302
> URL: https://issues.apache.org/jira/browse/LUCENE-7302
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
>Assignee: Michael McCandless
> Fix For: 6.1, master (7.0)
>
>
> Today, when you use multiple threads to concurrently index, Lucene
> knows the effective order that those operations were applied to the
> index, but doesn't return that information back to you.
> But this is important to know, if you want to build a reliable search
> API on top of Lucene.  Combined with the recently added NRT
> replication (LUCENE-5438) it can be a strong basis for an efficient
> distributed search API.
> I think we should return this information, since we already have it,
> and since it could simplify servers (ES/Solr) on top of Lucene:
>   - They would not require locking preventing the same id from being
> indexed concurrently since they could instead check the returned
> sequence number to know which update "won", for features like
> "realtime get".  (Locking is probably still needed for features
> like optimistic concurrency).
>   - When re-applying operations from a prior commit point, e.g. on
> recovering after a crash from a transaction log, they can know
> exactly which operations made it into the commit and which did
> not, and replay only the truly missing operations.
> Not returning this just hurts people who try to build servers on top
> with clear semantics on crashing/recovering ... I also struggled with
> this when building a simple "server wrapper" on top of Lucene
> (LUCENE-5376).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7278) Make template Calendar configurable in DateRangePrefixTree

2016-05-26 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302748#comment-15302748
 ] 

Dawid Weiss commented on LUCENE-7278:
-

Indeed, use {{argumentFormatting}}. You can even use two arguments to the 
constructor and use only the first one to create the description -- then you 
can format it any way you like inside the {{parameters}} factory.

{code}
@ParametersFactory(argumentFormatting = "%s")
public static Iterable<Object[]> parameters() {
  return Arrays.asList(new Object[][]{
      {"default", DateRangePrefixTree.DEFAULT_CAL},
      {"compat", DateRangePrefixTree.JAVA_UTIL_TIME_COMPAT_CAL}
  });
}
{code}
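
For completeness, a sketch of the matching two-argument constructor described above (the test class name is illustrative; the {{DateRangePrefixTree(Calendar)}} constructor is the one proposed by the attached patch):

{code}
import java.util.Calendar;
import org.apache.lucene.spatial.prefix.tree.DateRangePrefixTree;

public class DateRangePrefixTreeCalendarTest {
  private final DateRangePrefixTree tree;

  // Only the first argument feeds the "%s" description; the second carries
  // the actual template Calendar under test.
  public DateRangePrefixTreeCalendarTest(String label, Calendar templateCal) {
    this.tree = new DateRangePrefixTree(templateCal);
  }
}
{code}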

> Make template Calendar configurable in DateRangePrefixTree
> --
>
> Key: LUCENE-7278
> URL: https://issues.apache.org/jira/browse/LUCENE-7278
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7278.patch, LUCENE_7278.patch
>
>
> DateRangePrefixTree (a SpatialPrefixTree designed for dates and date ranges) 
> currently uses a hard-coded Calendar template for making new instances.  This 
> ought to be configurable so that, for example, the Gregorian change date can 
> be configured.  This is particularly important for compatibility with Java 
> 8's java.time API which uses the Gregorian calendar for all time (there is no 
> use of Julian prior to 1582).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7304) Doc values based block join implementation

2016-05-26 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302697#comment-15302697
 ] 

Martijn van Groningen commented on LUCENE-7304:
---

bq. If we switched block joins to use numeric doc values, I am wondering if we 
would ever need to read doc values in reverse order? 

Yes, in this patch, but I think the logic can be changed so that at least doc 
values don't need to be read in reverse. Currently there is one offset field 
holding both the offset to the parent for child docs and the offset to the 
first child for parents. This can be split into two fields, so that doc values 
never have to be read in reverse.
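
A minimal sketch of that split, with illustrative field names (not taken from the patch), using the 6.x {{NumericDocValues.get(docID)}} API:

{code}
import java.io.IOException;
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;

class BlockOffsets {
  // "toParent": each child doc stores its distance (in docids) to its parent.
  static int parentOf(LeafReader reader, int childDoc) throws IOException {
    NumericDocValues toParent = DocValues.getNumeric(reader, "toParent");
    return childDoc + (int) toParent.get(childDoc);
  }

  // "toFirstChild": each parent doc stores the distance back to its first child.
  static int firstChildOf(LeafReader reader, int parentDoc) throws IOException {
    NumericDocValues toFirstChild = DocValues.getNumeric(reader, "toFirstChild");
    return parentDoc - (int) toFirstChild.get(parentDoc);
  }
}
{code}

The point of the split is that a join iterator then only reads each field at the docid it is currently positioned on, so neither field needs to be read at a smaller docid than a previous read.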

> Doc values based block join implementation
> --
>
> Key: LUCENE-7304
> URL: https://issues.apache.org/jira/browse/LUCENE-7304
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Attachments: LUCENE_7304.patch
>
>
> At query time the block join relies on a bitset for finding the previous 
> parent doc while advancing the doc id iterator. On large indices these 
> bitsets can consume large amounts of JVM heap space. Also, typically due to 
> the nature of how these bitsets are set, the 'FixedBitSet' implementation is used.
> The idea I had was to replace the bitset usage by a numeric doc values field 
> that stores offsets. Each child doc stores how many docids it is from its 
> parent doc and each parent stores how many docids it is apart from its first 
> child. At query time this information can be used to perform the block join.
> I think another benefit of this approach is that external tools can now 
> easily determine if a doc is part of a block of documents and perhaps this 
> also helps index time sorting?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-9165) Problems with the spellcheck component running a search with cursor

2016-05-26 Thread Yamileydis Veranes (JIRA)
Yamileydis Veranes created SOLR-9165:


 Summary: Problems with the spellcheck component  running a search 
with cursor
 Key: SOLR-9165
 URL: https://issues.apache.org/jira/browse/SOLR-9165
 Project: Solr
  Issue Type: Bug
  Components: spellchecker
Affects Versions: 5.2
Reporter: Yamileydis Veranes


I'm having some problems with the spellcheck component, specifically when 
running a search with cursors.

When I run the following query:

http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
 desc,id asc

the following collations are returned (collation "incendio", 485 hits, 
misspelling corrected to "incendio").

Instead, when I try to run the same query but this time using cursors

http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
 desc,id asc=*

no collations are returned (the spellcheck response contains only "false"), 
and the server logs the following exception:


WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 docs_shard2_replica1] 
org.apache.solr.spelling.SpellCheckCollator; Exception trying to re-query to 
check if a spell check possibility would return any hits.
org.apache.solr.common.SolrException: Cursor functionality requires a sort 
containing a uniqueKey field tie breaker
at org.apache.solr.search.CursorMark.(CursorMark.java:93)
at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
at 
org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
at 
org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
at 
org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:497)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
at 
org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
at java.lang.Thread.run(Thread.java:745)
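
For reference, cursor paging requires the collection's uniqueKey field as the 
final sort tie breaker, which the request above does appear to include via 
"id asc"; judging by the log message, the exception seems to come from the 
spellcheck collator's internal re-query rather than from the original request. 
A minimal SolrJ sketch of such a cursor request, assuming "id" is the 
uniqueKey:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorRequestSketch {
  public static void main(String[] args) {
    SolrQuery q = new SolrQuery("insendio");
    q.set("spellcheck", "true");
    q.addSort("score", SolrQuery.ORDER.desc);
    q.addSort("id", SolrQuery.ORDER.asc); // uniqueKey tie breaker, required by cursors
    q.set(CursorMarkParams.CURSOR_MARK_PARAM, CursorMarkParams.CURSOR_MARK_START); // "*"
    // prints something like: q=insendio&spellcheck=true&sort=score+desc,id+asc&cursorMark=*
    System.out.println(q);
  }
}
{code}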



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9165) Problems with the spellcheck component running search with cursor

2016-05-26 Thread Yamileydis Veranes (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yamileydis Veranes updated SOLR-9165:
-
Summary: Problems with the spellcheck component  running search with cursor 
 (was: Problems with the spellcheck component  running a search with cursor)

> Problems with the spellcheck component  running search with cursor
> --
>
> Key: SOLR-9165
> URL: https://issues.apache.org/jira/browse/SOLR-9165
> Project: Solr
>  Issue Type: Bug
>  Components: spellchecker
>Affects Versions: 5.2
>Reporter: Yamileydis Veranes
>
> I'm having some problems with the spellcheck component, specifically when 
> running a search with cursors.
> When I run the following query:
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc
> the following collations are returned (collation "incendio", 485 hits, 
> misspelling corrected to "incendio").
> Instead, when I try to run the same query but this time using cursors
> http://192.1.1.13:8983/solr/docs/search?q=insendio=/search=192.1.1.14:8983/solr/docs,192.1.1.15:8983/solr/docs=id=true=score
>  desc,id asc=*
> no collations are returned (the spellcheck response contains only "false"), 
> and the server logs the following exception:
> WARN  - 2016-05-26 14:14:58.472; [docs shard2 core_node4 
> docs_shard2_replica1] org.apache.solr.spelling.SpellCheckCollator; Exception 
> trying to re-query to check if a spell check possibility would return any 
> hits.
> org.apache.solr.common.SolrException: Cursor functionality requires a sort 
> containing a uniqueKey field tie breaker
>   at org.apache.solr.search.CursorMark.(CursorMark.java:93)
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:189)
>   at 
> org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:141)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:237)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.process(SpellCheckComponent.java:202)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:255)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
>   at org.eclipse.jetty.server.Server.handle(Server.java:497)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
>   at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
>   at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-7278) Make template Calendar configurable in DateRangePrefixTree

2016-05-26 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302689#comment-15302689
 ] 

Steve Rowe commented on LUCENE-7278:


See [~dawid.weiss]'s suggestions from the past: 
[http://markmail.org/message/diu2wpjiiyrlfgh6].

Here's a patch I'm going to try locally:

{noformat}
diff --git a/lucene/spatial-extras/src/test/org/apache/lucene/spatial/prefix/tree/DateRangePrefixTreeTest.java b/lucene/spatial-extras/src/test/org/apache/lucene/spatial/prefix/tree/DateRangePrefixTreeTest.java
index d76454e..022c6de 100644
--- a/lucene/spatial-extras/src/test/org/apache/lucene/spatial/prefix/tree/DateRangePrefixTreeTest.java
+++ b/lucene/spatial-extras/src/test/org/apache/lucene/spatial/prefix/tree/DateRangePrefixTreeTest.java
@@ -32,7 +32,7 @@ import org.locationtech.spatial4j.shape.SpatialRelation;
 
 public class DateRangePrefixTreeTest extends LuceneTestCase {
 
-  @ParametersFactory
+  @ParametersFactory(argumentFormatting = "calendar=%s")
   public static Iterable<Object[]> parameters() {
     return Arrays.asList(new Object[][]{
         {DateRangePrefixTree.DEFAULT_CAL},
         {DateRangePrefixTree.JAVA_UTIL_TIME_COMPAT_CAL}
{noformat}

> Make template Calendar configurable in DateRangePrefixTree
> --
>
> Key: LUCENE-7278
> URL: https://issues.apache.org/jira/browse/LUCENE-7278
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7278.patch, LUCENE_7278.patch
>
>
> DateRangePrefixTree (a SpatialPrefixTree designed for dates and date ranges) 
> currently uses a hard-coded Calendar template for making new instances.  This 
> ought to be configurable so that, for example, the Gregorian change date can 
> be configured.  This is particularly important for compatibility with Java 
> 8's java.time API which uses the Gregorian calendar for all time (there is no 
> use of Julian prior to 1582).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer resolved SOLR-9141.
--
Resolution: Fixed

Minoru, thank you for reporting this one.

> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: James Dyer
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch, SOLR-9141.patch, SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at 

[jira] [Commented] (LUCENE-7278) Make template Calendar configurable in DateRangePrefixTree

2016-05-26 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302680#comment-15302680
 ] 

David Smiley commented on LUCENE-7278:
--

Thanks for bringing this to my attention!

Ugh, it appears the use of \@ParametersFactory means it toString's the 
constructor args, and Calendar happens to have a long toString.  Any 
suggestions [~dweiss]?

> Make template Calendar configurable in DateRangePrefixTree
> --
>
> Key: LUCENE-7278
> URL: https://issues.apache.org/jira/browse/LUCENE-7278
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7278.patch, LUCENE_7278.patch
>
>
> DateRangePrefixTree (a SpatialPrefixTree designed for dates and date ranges) 
> currently uses a hard-coded Calendar template for making new instances.  This 
> ought to be configurable so that, for example, the Gregorian change date can 
> be configured.  This is particularly important for compatibility with Java 
> 8's java.time API which uses the Gregorian calendar for all time (there is no 
> use of Julian prior to 1582).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302670#comment-15302670
 ] 

ASF subversion and git services commented on SOLR-9141:
---

Commit 1609428786b17135f0d8ba413c4203b88977304b in lucene-solr's branch 
refs/heads/branch_6x from jdyer1
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1609428 ]

SOLR-9141: Fix ClassCastException when using the /sql handler count() function 
with single-shard collections


> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: James Dyer
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch, SOLR-9141.patch, SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

[jira] [Commented] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302664#comment-15302664
 ] 

ASF subversion and git services commented on SOLR-9141:
---

Commit 4d4030350b79303d6f358612473f4e68570858cc in lucene-solr's branch 
refs/heads/master from jdyer1
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4d40303 ]

SOLR-9141: Fix ClassCastException when using the /sql handler count() function 
with single-shard collections


> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: James Dyer
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch, SOLR-9141.patch, SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

[jira] [Assigned] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer reassigned SOLR-9141:


Assignee: James Dyer  (was: Joel Bernstein)

> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: James Dyer
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch, SOLR-9141.patch, SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at 

[jira] [Commented] (LUCENE-7278) Make template Calendar configurable in DateRangePrefixTree

2016-05-26 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302647#comment-15302647
 ] 

Steve Rowe commented on LUCENE-7278:


Clover has been failing on ASF Jenkins since this was committed, e.g. from 
[https://builds.apache.org/job/Lucene-Solr-Clover-master/438/consoleText]:

{noformat}
Caused by: java.io.FileNotFoundException: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Clover-master/lucene/build/clover/reports/org/apache/lucene/spatial/prefix/tree/DateRangePrefixTreeTest_testToStringISO8601__p0_java_util_GregorianCalendar_time___areFieldsSet_false_areAllFieldsSet_false_lenient_true_zone_sun_util_calendar_ZoneInfo_id__UTC__offset_0_dstSavings_0_useDaylight_false_transitions_0_lastRule_null__firstDayOfWeek_2_minimalDaysInFirstWeek_4_ERA___YEAR___MONTH___WEEK_OF_YEAR___WEEK_OF_MONTH___DAY_OF_MONTH___DAY_OF_YEAR___DAY_OF_WEEK___DAY_OF_WEEK_IN_MONTH___AM_PM___HOUR___HOUR_OF_DAY___MINUTE___SECOND___MILLISECOND___ZONE_OFFSET___DST_OFFSET_-535x98.html
 (File name too long)
{noformat}

> Make template Calendar configurable in DateRangePrefixTree
> --
>
> Key: LUCENE-7278
> URL: https://issues.apache.org/jira/browse/LUCENE-7278
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.1
>
> Attachments: LUCENE_7278.patch, LUCENE_7278.patch
>
>
> DateRangePrefixTree (a SpatialPrefixTree designed for dates and date ranges) 
> currently uses a hard-coded Calendar template for making new instances.  This 
> ought to be configurable so that, for example, the Gregorian change date can 
> be configured.  This is particularly important for compatibility with Java 
> 8's java.time API which uses the Gregorian calendar for all time (there is no 
> use of Julian prior to 1582).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7304) Doc values based block join implementation

2016-05-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302642#comment-15302642
 ] 

Adrien Grand commented on LUCENE-7304:
--

If we switched block joins to use numeric doc values, I am wondering whether we 
would ever need to read doc values in reverse order? The reason I am asking is 
that there has been some tension around cutting doc values over to an iterator 
API in order to improve compression and better deal with sparse doc values; 
see e.g. LUCENE-7253.

> Doc values based block join implementation
> --
>
> Key: LUCENE-7304
> URL: https://issues.apache.org/jira/browse/LUCENE-7304
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Attachments: LUCENE_7304.patch
>
>
> At query time the block join relies on a bitset for finding the previous 
> parent doc while advancing the doc id iterator. On large indices these 
> bitsets can consume large amounts of JVM heap space. Also, typically due to 
> the nature of how these bitsets are set, the 'FixedBitSet' implementation is 
> used.
> The idea I had was to replace the bitset usage with a numeric doc values 
> field that stores offsets. Each child doc stores how many docids it is from 
> its parent doc and each parent stores how many docids apart it is from its 
> first child. At query time this information can be used to perform the block 
> join.
> I think another benefit of this approach is that external tools can now 
> easily determine if a doc is part of a block of documents, and perhaps this 
> also helps index time sorting?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-master - Build # 1023 - Still Failing

2016-05-26 Thread Alan Woodward
Ah, I'd missed that.  Thanks!

Weirdly, it reproduces from the command-line but not from my IDE.  Some 
old-fashioned debugging coming up, then...

Alan Woodward
www.flax.co.uk


On 26 May 2016, at 18:15, Michael McCandless wrote:

> Thanks Alan.
> 
> I think this is the issue for it: 
> https://issues.apache.org/jira/browse/LUCENE-7236
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com
> 
> On Thu, May 26, 2016 at 1:11 PM, Alan Woodward  wrote:
> This reproduces.  Will dig.
> 
> Alan Woodward
> www.flax.co.uk
> 
> 
> On 26 May 2016, at 18:04, Apache Jenkins Server wrote:
> 
>> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1023/
>> 
>> 1 tests failed.
>> FAILED:  org.apache.lucene.search.spans.TestSpanCollection.testOrQuery
>> 
>> Error Message:
>> Missing term field:w3
>> 
>> Stack Trace:
>> java.lang.AssertionError: Missing term field:w3
>>  at 
>> __randomizedtesting.SeedInfo.seed([4EFA0BE479D6EF44:2E9E40D188849A0C]:0)
>>  at org.junit.Assert.fail(Assert.java:93)
>>  at org.junit.Assert.assertTrue(Assert.java:43)
>>  at 
>> org.apache.lucene.search.spans.TestSpanCollection.checkCollectedTerms(TestSpanCollection.java:103)
>>  at 
>> org.apache.lucene.search.spans.TestSpanCollection.testOrQuery(TestSpanCollection.java:147)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>  at 
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>>  at 
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>  at java.lang.reflect.Method.invoke(Method.java:498)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>>  at 
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>>  at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>>  at 
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>>  at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>>  at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>>  at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>>  at 
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>>  at 
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>>  at 
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>>  at 
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>>  at 
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>>  at 
>> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>>  at 
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at 
>> 

[jira] [Commented] (LUCENE-7304) Doc values based block join implementation

2016-05-26 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302569#comment-15302569
 ] 

Martijn van Groningen commented on LUCENE-7304:
---

bq. I wonder... instead couldn't we get a DocIdSetIterator of parent docs and 
kind of intersect it with the child DISI?

I wondered that a while ago too, but we can't go backwards with 
`DocIdSetIterator`, and that is what the advance method 
('parentBits.prevSetBit(parentTarget-1)') of the block join query requires in 
order to figure out where the first child starts for 'parentTarget'.
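
A tiny sketch of that backward step as the current bitset-based approach does 
it (simplified, not the real query code):

{code}
import org.apache.lucene.util.FixedBitSet;

public class BlockBoundsSketch {
  // Why advancing needs to look backwards: for a target parent, the block's
  // first child is found by scanning back to the previous parent bit, which a
  // forward-only DocIdSetIterator cannot do.
  // Assumes parentTarget > 0 and that parentBits marks every parent doc.
  public static int firstChildOf(FixedBitSet parentBits, int parentTarget) {
    int prevParent = parentBits.prevSetBit(parentTarget - 1); // -1 if there is none
    return prevParent + 1; // children occupy docids (prevParent, parentTarget)
  }
}
{code}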

> Doc values based block join implementation
> --
>
> Key: LUCENE-7304
> URL: https://issues.apache.org/jira/browse/LUCENE-7304
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Attachments: LUCENE_7304.patch
>
>
> At query time the block join relies on a bitset for finding the previous 
> parent doc while advancing the doc id iterator. On large indices these 
> bitsets can consume large amounts of JVM heap space. Also, typically due to 
> the nature of how these bitsets are set, the 'FixedBitSet' implementation is 
> used.
> The idea I had was to replace the bitset usage with a numeric doc values 
> field that stores offsets. Each child doc stores how many docids it is from 
> its parent doc and each parent stores how many docids apart it is from its 
> first child. At query time this information can be used to perform the block 
> join.
> I think another benefit of this approach is that external tools can now 
> easily determine if a doc is part of a block of documents, and perhaps this 
> also helps index time sorting?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9160) Sync 6x and 7.0 UninvertingReader for Solr

2016-05-26 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302567#comment-15302567
 ] 

Yonik Seeley commented on SOLR-9160:


I'll tackle this in a day or so barring objections (this is a rather binary 
issue... we either do it or we don't).

> Sync 6x and 7.0 UninvertingReader for Solr
> --
>
> Key: SOLR-9160
> URL: https://issues.apache.org/jira/browse/SOLR-9160
> Project: Solr
>  Issue Type: Improvement
>Reporter: Yonik Seeley
>
> LUCENE-7283 migrated some classes like UninvertedReader from Lucene to Solr 
> in master (7) but not 6x (to give time for deprecation in Lucene).
> Given we are only on 6.0 release, it may be nice to make the same changes 
> (under /solr only) in 6x to ease backporting and allow customization/changes 
> for Solr in the 6x line.
> One method might be to cherry-pick the change from LUCENE-7283 and then 
> revert just the /lucene directory.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7306) Use radix sort for points too

2016-05-26 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302533#comment-15302533
 ] 

Adrien Grand commented on LUCENE-7306:
--

I started playing with the last dimension since it was low-hanging fruit, but 
I'll explore whether we can make things better for the other dimensions and the 
heap writer too.
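
For readers who haven't seen the technique, here is a toy LSD 
(least-significant-byte-first) radix sort over fixed-width byte keys; it is 
illustrative only and does not claim to match the variant used in the patch.

{code}
import java.util.Arrays;

public class FixedWidthRadixSort {
  /** Sorts keys of identical length, treating bytes as unsigned,
   *  most significant byte first (lexicographic order). */
  public static void sort(byte[][] keys, int bytesPerKey) {
    byte[][] buffer = new byte[keys.length][];
    // LSD: process bytes from least to most significant; a stable counting
    // sort per byte position preserves the order established by earlier passes.
    for (int b = bytesPerKey - 1; b >= 0; b--) {
      int[] counts = new int[257];
      for (byte[] key : keys) {
        counts[(key[b] & 0xFF) + 1]++;           // histogram of byte values
      }
      for (int i = 1; i < counts.length; i++) {
        counts[i] += counts[i - 1];              // prefix sums = start offsets
      }
      for (byte[] key : keys) {
        buffer[counts[key[b] & 0xFF]++] = key;   // stable redistribution
      }
      System.arraycopy(buffer, 0, keys, 0, keys.length);
    }
  }

  public static void main(String[] args) {
    byte[][] keys = {{0x02, 0x01}, {0x01, (byte) 0xFF}, {0x01, 0x00}};
    sort(keys, 2);
    for (byte[] k : keys) {
      System.out.println(Arrays.toString(k)); // [1,0], [1,-1], [2,1]
    }
  }
}
{code}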

> Use radix sort for points too
> -
>
> Key: LUCENE-7306
> URL: https://issues.apache.org/jira/browse/LUCENE-7306
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7903.patch
>
>
> Like postings, points make heavy use of sorting at indexing time, so we 
> should try to leverage radix sort too?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7374) Backup/Restore should provide a param for specifying the directory implementation it should use

2016-05-26 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302519#comment-15302519
 ] 

Mark Miller commented on SOLR-7374:
---

Where are we at here? I'd really like to get this in so that SOLR-9055 can also 
be wrapped up. What do you think, @varunthacker1989?

> Backup/Restore should provide a param for specifying the directory 
> implementation it should use
> ---
>
> Key: SOLR-7374
> URL: https://issues.apache.org/jira/browse/SOLR-7374
> Project: Solr
>  Issue Type: Bug
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Fix For: 5.2, 6.0
>
> Attachments: SOLR-7374.patch, SOLR-7374.patch, SOLR-7374.patch
>
>
> Currently when we create a backup we use SimpleFSDirectory to write the 
> backup indexes. Similarly during a restore we open the index using 
> FSDirectory.open.
> We should provide a param called {{directoryImpl}} or {{type}} which will be 
> used to specify the Directory implementation to back up the index. 
> Likewise during a restore you would need to specify the directory impl which 
> was used during backup so that the index can be opened correctly.
> This param will address the problem that currently if a user is running Solr 
> on HDFS there is no way to use the backup/restore functionality as the 
> directory is hardcoded.
> With this one could be running Solr on a local FS but backup the index on 
> HDFS etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-master - Build # 1023 - Still Failing

2016-05-26 Thread Michael McCandless
Thanks Alan.

I think this is the issue for it:
https://issues.apache.org/jira/browse/LUCENE-7236

Mike McCandless

http://blog.mikemccandless.com

On Thu, May 26, 2016 at 1:11 PM, Alan Woodward  wrote:

> This reproduces.  Will dig.
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 26 May 2016, at 18:04, Apache Jenkins Server wrote:
>
> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1023/
>
> 1 tests failed.
> FAILED:  org.apache.lucene.search.spans.TestSpanCollection.testOrQuery
>
> Error Message:
> Missing term field:w3
>
> Stack Trace:
> java.lang.AssertionError: Missing term field:w3
> at __randomizedtesting.SeedInfo.seed([4EFA0BE479D6EF44:2E9E40D188849A0C]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at
> org.apache.lucene.search.spans.TestSpanCollection.checkCollectedTerms(TestSpanCollection.java:103)
> at
> org.apache.lucene.search.spans.TestSpanCollection.testOrQuery(TestSpanCollection.java:147)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at java.lang.Thread.run(Thread.java:745)
>
>
>
>
> Build Log:
> [...truncated 507 lines...]
>   [junit4] Suite: org.apache.lucene.search.spans.TestSpanCollection
>   [junit4]   2> NOTE: download the large Jenkins line-docs file by running
> 'ant get-jenkins-line-docs' in the lucene directory.
>   [junit4]   2> NOTE: reproduce with: ant test
>  -Dtestcase=TestSpanCollection -Dtests.method=testOrQuery
> -Dtests.seed=4EFA0BE479D6EF44 -Dtests.multiplier=2 -Dtests.nightly=true
> -Dtests.slow=true
> 

Re: [JENKINS] Lucene-Solr-NightlyTests-master - Build # 1023 - Still Failing

2016-05-26 Thread Alan Woodward
This reproduces.  Will dig.

Alan Woodward
www.flax.co.uk


On 26 May 2016, at 18:04, Apache Jenkins Server wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1023/
> 
> 1 tests failed.
> FAILED:  org.apache.lucene.search.spans.TestSpanCollection.testOrQuery
> 
> Error Message:
> Missing term field:w3
> 
> Stack Trace:
> java.lang.AssertionError: Missing term field:w3
>   at 
> __randomizedtesting.SeedInfo.seed([4EFA0BE479D6EF44:2E9E40D188849A0C]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at org.junit.Assert.assertTrue(Assert.java:43)
>   at 
> org.apache.lucene.search.spans.TestSpanCollection.checkCollectedTerms(TestSpanCollection.java:103)
>   at 
> org.apache.lucene.search.spans.TestSpanCollection.testOrQuery(TestSpanCollection.java:147)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
>   at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
>   at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
>   at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
>   at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
>   at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
>   at 
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
>   at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
>   at java.lang.Thread.run(Thread.java:745)
> 
> 
> 
> 
> Build Log:
> [...truncated 507 lines...]
>   [junit4] Suite: org.apache.lucene.search.spans.TestSpanCollection
>   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
> 'ant get-jenkins-line-docs' in the lucene directory.
>   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSpanCollection 
> -Dtests.method=testOrQuery -Dtests.seed=4EFA0BE479D6EF44 

[jira] [Commented] (LUCENE-7306) Use radix sort for points too

2016-05-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302459#comment-15302459
 ] 

Michael McCandless commented on LUCENE-7306:


+1, wonderful!

> Use radix sort for points too
> -
>
> Key: LUCENE-7306
> URL: https://issues.apache.org/jira/browse/LUCENE-7306
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7903.patch
>
>
> Like postings, points make heavy use of sorting at indexing time, so we 
> should try to leverage radix sort too?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1023 - Still Failing

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1023/

1 tests failed.
FAILED:  org.apache.lucene.search.spans.TestSpanCollection.testOrQuery

Error Message:
Missing term field:w3

Stack Trace:
java.lang.AssertionError: Missing term field:w3
at 
__randomizedtesting.SeedInfo.seed([4EFA0BE479D6EF44:2E9E40D188849A0C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.search.spans.TestSpanCollection.checkCollectedTerms(TestSpanCollection.java:103)
at 
org.apache.lucene.search.spans.TestSpanCollection.testOrQuery(TestSpanCollection.java:147)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 507 lines...]
   [junit4] Suite: org.apache.lucene.search.spans.TestSpanCollection
   [junit4]   2> NOTE: download the large Jenkins line-docs file by running 
'ant get-jenkins-line-docs' in the lucene directory.
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSpanCollection 
-Dtests.method=testOrQuery -Dtests.seed=4EFA0BE479D6EF44 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/x1/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=ja-JP-u-ca-japanese-x-lvariant-JP -Dtests.timezone=NZ-CHAT 
-Dtests.asserts=true 

[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-05-26 Thread Lewis John McGibbney (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302442#comment-15302442
 ] 

Lewis John McGibbney commented on SOLR-8981:


I am working on this again and will try to post a patch ASAP, [~talli...@mitre.org]. I have the following test failing in Solr:
https://github.com/apache/lucene-solr/blob/master/solr/contrib/extraction/src/test/org/apache/solr/handler/extraction/ExtractingRequestHandlerTest.java#L505
I have been debugging the tests with no luck as of yet. I'll post a new PR 
later today; it is rebased against lucene-solr master and Tika 1.13.

> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 157 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/157/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication

Error Message:
timed out waiting for collection1 startAt time to exceed: Thu May 26 19:00:18 
EEST 2016

Stack Trace:
java.lang.AssertionError: timed out waiting for collection1 startAt time to 
exceed: Thu May 26 19:00:18 EEST 2016
at 
__randomizedtesting.SeedInfo.seed([4F21D7B470D3B2D3:B85239ECB63B1D35]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.handler.TestReplicationHandler.watchCoreStartAt(TestReplicationHandler.java:1508)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigAliasReplication(TestReplicationHandler.java:1314)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  

[jira] [Updated] (LUCENE-7306) Use radix sort for points too

2016-05-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7306:
-
Attachment: LUCENE-7903.patch

Here is a simple patch that uses radix sorting on the last dimension (which is 
convenient since the bytes for the dimension and for the doc id are contiguous).
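
For illustration only (this is not the patch; Lucene's writers sort packed byte 
blocks in place rather than byte[][] arrays), here is a minimal, self-contained 
sketch of MSB radix sorting over fixed-width keys, which is what contiguous 
dimension+docID bytes make convenient:
{code}
public final class RadixSortSketch {

  /** Sorts fixed-width byte[] keys lexicographically, e.g. packed point value followed by doc id. */
  public static void sort(byte[][] keys, int keyLen) {
    sort(keys, 0, keys.length, 0, keyLen);
  }

  private static void sort(byte[][] keys, int from, int to, int k, int keyLen) {
    if (to - from <= 1 || k == keyLen) {
      return; // bucket is trivially sorted, or all key bytes have been consumed
    }
    // Count how many keys fall into each bucket for byte k.
    int[] counts = new int[257];
    for (int i = from; i < to; i++) {
      counts[(keys[i][k] & 0xFF) + 1]++;
    }
    // Turn counts into absolute start offsets per bucket.
    int[] start = new int[256];
    int sum = from;
    for (int b = 0; b < 256; b++) {
      start[b] = sum;
      sum += counts[b + 1];
    }
    // Stable partition by byte k through a scratch array.
    byte[][] scratch = new byte[to - from][];
    int[] next = start.clone();
    for (int i = from; i < to; i++) {
      scratch[next[keys[i][k] & 0xFF]++ - from] = keys[i];
    }
    System.arraycopy(scratch, 0, keys, from, to - from);
    // Recurse into each bucket on the next byte.
    for (int b = 0; b < 256; b++) {
      sort(keys, start[b], next[b], k + 1, keyLen);
    }
  }
}
{code}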

I used IndexAndSearchOpenStreetMaps to benchmark. The indexing time went from 
344s to 327s (-5%). Here are the first 30 logs for merging points in both cases:

Master
{code}
SM 0 [2016-05-26T16:28:35.224Z; Thread-0]: 2414 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:28:39.390Z; Thread-0]: 1899 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:28:43.443Z; Thread-0]: 1869 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:28:47.426Z; Thread-0]: 1812 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:28:51.444Z; Thread-0]: 1850 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:28:55.422Z; Thread-0]: 1819 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:28:59.409Z; Thread-0]: 1823 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:03.368Z; Thread-0]: 1817 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:07.296Z; Thread-0]: 1802 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:11.205Z; Thread-0]: 1793 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:34.980Z; Thread-0]: 23722 msec to merge points [10963000 
docs]
SM 0 [2016-05-26T16:29:38.934Z; Thread-0]: 1798 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:42.844Z; Thread-0]: 1779 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:46.849Z; Thread-0]: 1797 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:50.866Z; Thread-0]: 1802 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:54.917Z; Thread-0]: 1820 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:29:58.965Z; Thread-0]: 1823 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:02.889Z; Thread-0]: 1783 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:06.815Z; Thread-0]: 1785 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:10.835Z; Thread-0]: 1876 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:14.759Z; Thread-0]: 1790 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:37.886Z; Thread-0]: 23085 msec to merge points [10963000 
docs]
SM 0 [2016-05-26T16:30:41.777Z; Thread-0]: 1783 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:45.837Z; Thread-0]: 1783 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:49.731Z; Thread-0]: 1785 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:53.624Z; Thread-0]: 1776 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:30:57.536Z; Thread-0]: 1782 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:31:01.512Z; Thread-0]: 1787 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:31:05.477Z; Thread-0]: 1786 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:31:09.889Z; Thread-0]: 1770 msec to merge points [1096300 
docs]
{code}

Patch
{code}
SM 0 [2016-05-26T16:20:21.241Z; Thread-0]: 2405 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:25.072Z; Thread-0]: 1583 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:28.834Z; Thread-0]: 1537 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:32.546Z; Thread-0]: 1489 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:36.426Z; Thread-0]: 1524 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:40.263Z; Thread-0]: 1519 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:44.123Z; Thread-0]: 1511 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:48.013Z; Thread-0]: 1506 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:51.807Z; Thread-0]: 1486 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:20:55.882Z; Thread-0]: 1479 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:17.042Z; Thread-0]: 21106 msec to merge points [10963000 
docs]
SM 0 [2016-05-26T16:21:20.872Z; Thread-0]: 1517 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:24.629Z; Thread-0]: 1467 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:28.408Z; Thread-0]: 1479 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:32.219Z; Thread-0]: 1485 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:36.108Z; Thread-0]: 1501 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:39.982Z; Thread-0]: 1504 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:44.836Z; Thread-0]: 1502 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:48.717Z; Thread-0]: 1499 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:52.548Z; Thread-0]: 1503 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:21:56.436Z; Thread-0]: 1514 msec to merge points [1096300 
docs]
SM 0 [2016-05-26T16:22:17.361Z; Thread-0]: 20883 msec to merge 

[jira] [Commented] (SOLR-7887) Upgrade Solr to use log4j2 -- log4j 1 now officially end of life

2016-05-26 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302432#comment-15302432
 ] 

Keith Laban commented on SOLR-7887:
---

Is there any reason the upgrade and patch [~thelabdude] mentions above 
hasn't made it into master?

> Upgrade Solr to use log4j2 -- log4j 1 now officially end of life
> 
>
> Key: SOLR-7887
> URL: https://issues.apache.org/jira/browse/SOLR-7887
> Project: Solr
>  Issue Type: Task
>Affects Versions: 5.2.1
>Reporter: Shawn Heisey
>
> The logging services project has officially announced the EOL of log4j 1:
> https://blogs.apache.org/foundation/entry/apache_logging_services_project_announces
> In the official binary jetty deployment, we use log4j 1.2 as our final 
> logging destination, so the admin UI has a log watcher that actually uses 
> log4j and java.util.logging classes.  That will need to be extended to add 
> log4j2.  I think that might be the largest pain point to this upgrade.
> There is some crossover between log4j2 and slf4j.  Figuring out exactly which 
> jars need to be in the lib/ext directory will take some research.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8988) Improve facet.method=fcs performance in SolrCloud

2016-05-26 Thread Keith Laban (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302430#comment-15302430
 ] 

Keith Laban commented on SOLR-8988:
---

That's right. This affects all queries where {{isDistrib}} is true for any 
reason.
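
For reference, a SolrJ sketch of the kind of request described below (the core 
URL and facet field are hypothetical; the point is only that facet.mincount=1 
now also reaches the shards):
{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class FcsFacetExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical core URL and facet field; the params mirror the test description below.
    try (HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1")) {
      SolrQuery q = new SolrQuery("*:*");
      q.setFacet(true);
      q.addFacetField("category");
      q.set("facet.method", "fcs");
      q.setFacetMinCount(1);     // with the patch, mincount=1 is forwarded to the shards as-is
      q.setFacetLimit(500);
      q.setFacetSort("count");
      System.out.println(client.query(q).getFacetField("category").getValues());
    }
  }
}
{code}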

> Improve facet.method=fcs performance in SolrCloud
> -
>
> Key: SOLR-8988
> URL: https://issues.apache.org/jira/browse/SOLR-8988
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.5, 6.0
>Reporter: Keith Laban
>Assignee: Dennis Gove
> Fix For: 6.1
>
> Attachments: SOLR-8988.patch, SOLR-8988.patch, SOLR-8988.patch, 
> SOLR-8988.patch, Screen Shot 2016-04-25 at 2.54.47 PM.png, Screen Shot 
> 2016-04-25 at 2.55.00 PM.png
>
>
> This relates to SOLR-8559 -- which improves the algorithm used by fcs 
> faceting when {{facet.mincount=1}}
> This patch allows {{facet.mincount}} to be sent as 1 for distributed queries. 
> As far as I can tell there is no reason to set {{facet.mincount=0}} for 
> refinement purposes. After trying to make sense of all the refinement logic, 
> I can't see how the difference between _no value_ and _value=0_ would have a 
> negative effect.
> *Test perf:*
> - ~15million unique terms
> - query matches ~3million documents
> *Params:*
> {code}
> facet.mincount=1
> facet.limit=500
> facet.method=fcs
> facet.sort=count
> {code}
> *Average Time Per Request:*
> - Before patch:  ~20 seconds
> - After patch: <1 second
> *Note*: all tests pass and in my test, the output was identical before and 
> after patch.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8981) Upgrade to Tika 1.13 when it is available

2016-05-26 Thread Tim Allison (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302427#comment-15302427
 ] 

Tim Allison commented on SOLR-8981:
---

CVE-2016-4434: Apache Tika XML External Entity vulnerability in versions 
0.10-1.12: 
[announcement|https://mail-archives.apache.org/mod_mbox/tika-dev/201605.mbox/%3C1705136517.1175366.1464278135251.JavaMail.yahoo%40mail.yahoo.com%3E]

> Upgrade to Tika 1.13 when it is available
> -
>
> Key: SOLR-8981
> URL: https://issues.apache.org/jira/browse/SOLR-8981
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tim Allison
>Priority: Minor
>
> Tika 1.13 should be out within a month.  This includes PDFBox 2.0.0 and a 
> number of other upgrades and improvements.  
> If there are any showstoppers in 1.13 from Solr's side or requests before we 
> roll 1.13, let us know.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7306) Use radix sort for points too

2016-05-26 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7306:


 Summary: Use radix sort for points too
 Key: LUCENE-7306
 URL: https://issues.apache.org/jira/browse/LUCENE-7306
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor


Like postings, points make heavy use of sorting at indexing time, so we should 
try to leverage radix sort too?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-6.0 - Build # 20 - Failure

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.0/20/

No tests ran.

Build Log:
[...truncated 40031 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 245 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 
JAVA_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (12.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.0.1-src.tgz...
   [smoker] 28.5 MB in 0.02 sec (1171.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.1.tgz...
   [smoker] 62.9 MB in 0.05 sec (1151.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.0.1.zip...
   [smoker] 73.6 MB in 0.06 sec (1163.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.0.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6045 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.1.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6045 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.0.1-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 215 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (74.1 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.0.1-src.tgz...
   [smoker] 37.6 MB in 0.03 sec (1085.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.1.tgz...
   [smoker] 131.5 MB in 0.12 sec (1088.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.0.1.zip...
   [smoker] 140.0 MB in 0.12 sec (1130.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.0.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.0.1.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.1/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.1/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.1-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.1-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.0/lucene/build/smokeTestRelease/tmp/unpack/solr-6.0.1-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 30 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]   [|]   [/]   [-]  
   

[JENKINS] Lucene-Solr-6.0-Linux (32bit/jdk1.8.0_92) - Build # 187 - Still Failing!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.0-Linux/187/
Java: 32bit/jdk1.8.0_92 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.BasicAuthIntegrationTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([50E423FCC1606504]:0)


FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasics

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([50E423FCC1606504]:0)




Build Log:
[...truncated 18 lines...]
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: hudson.plugins.git.GitException: 
org.eclipse.jgit.api.errors.TransportException: Connection reset
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:639)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:808)
... 11 more
Caused by: org.eclipse.jgit.api.errors.TransportException: Connection reset
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:139)
at 
org.jenkinsci.plugins.gitclient.JGitAPIImpl$2.execute(JGitAPIImpl.java:637)
... 12 more
Caused by: org.eclipse.jgit.errors.TransportException: Connection reset
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:182)
at 
org.eclipse.jgit.transport.TransportGitAnon$TcpFetchConnection.(TransportGitAnon.java:194)
at 
org.eclipse.jgit.transport.TransportGitAnon.openFetch(TransportGitAnon.java:120)
at 
org.eclipse.jgit.transport.FetchProcess.executeImp(FetchProcess.java:136)
at 
org.eclipse.jgit.transport.FetchProcess.execute(FetchProcess.java:122)
at org.eclipse.jgit.transport.Transport.fetch(Transport.java:1138)
at org.eclipse.jgit.api.FetchCommand.call(FetchCommand.java:130)
... 13 more
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:196)
at java.net.SocketInputStream.read(SocketInputStream.java:122)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:235)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:275)
at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
at org.eclipse.jgit.util.IO.readFully(IO.java:246)
at 
org.eclipse.jgit.transport.PacketLineIn.readLength(PacketLineIn.java:186)
at 
org.eclipse.jgit.transport.PacketLineIn.readString(PacketLineIn.java:138)
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefsImpl(BasePackConnection.java:195)
at 
org.eclipse.jgit.transport.BasePackConnection.readAdvertisedRefs(BasePackConnection.java:176)
... 19 more
ERROR: null
Retrying after 10 seconds
Fetching changes from the remote Git repository
Cleaning workspace
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
git://git.apache.org/lucene-solr.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:810)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1066)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1097)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1269)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:604)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:529)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:410)
Caused by: 

[jira] [Commented] (LUCENE-7304) Doc values based block join implementation

2016-05-26 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302245#comment-15302245
 ] 

David Smiley commented on LUCENE-7304:
--

This is interesting. I wonder... instead couldn't we get a DocIdSetIterator of 
parent docs and kind of intersect it with the child DISI?  (no bitset, no 
potentially fragile encoding of relative doc ID offsets). This is a half-baked 
idea and I'm not sure if it even makes any sense :-P so take it with a grain of 
salt!
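
For what it's worth, a rough sketch of the offset encoding described in the 
issue below (the field name and sign conventions are hypothetical, not taken 
from the attached patch):
{code}
import org.apache.lucene.index.NumericDocValues;

// Hypothetical "blockOffset" NumericDocValues field:
//   child doc d  stores parentDoc(d) - d     (distance forward to its parent)
//   parent doc p stores p - firstChild(p)    (distance back to its first child)
final class BlockOffsets {
  private final NumericDocValues blockOffset;

  BlockOffsets(NumericDocValues blockOffset) {
    this.blockOffset = blockOffset;
  }

  /** Parent doc id of a child doc, without consulting a FixedBitSet. */
  int parentOf(int childDoc) {
    return childDoc + (int) blockOffset.get(childDoc);
  }

  /** First child doc id of a parent doc. */
  int firstChildOf(int parentDoc) {
    return parentDoc - (int) blockOffset.get(parentDoc);
  }
}
{code}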

> Doc values based block join implementation
> --
>
> Key: LUCENE-7304
> URL: https://issues.apache.org/jira/browse/LUCENE-7304
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Attachments: LUCENE_7304.patch
>
>
> At query time the block join relies on a bitset for finding the previous 
> parent doc while advancing the doc id iterator. On large indices these 
> bitsets can consume large amounts of JVM heap space. Also, typically due to 
> the nature of how these bitsets are set, the 'FixedBitSet' implementation is used.
> The idea I had was to replace the bitset usage by a numeric doc values field 
> that stores offsets. Each child doc stores how many docids it is from its 
> parent doc and each parent stores how many docids it is apart from its first 
> child. At query time this information can be used to perform the block join.
> I think another benefit of this approach is that external tools can now 
> easily determine if a doc is part of a block of documents and perhaps this 
> also helps index time sorting?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 74 - Failure

2016-05-26 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/74/

1 tests failed.
FAILED:  org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=50121, name=collection4, 
state=RUNNABLE, group=TGRP-HdfsCollectionsAPIDistributedZkTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=50121, name=collection4, state=RUNNABLE, 
group=TGRP-HdfsCollectionsAPIDistributedZkTest]
Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:50220: Could not find collection : 
awholynewstresscollection_collection4_4
at __randomizedtesting.SeedInfo.seed([3B1B88D3E57A4A81]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:590)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:259)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:248)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:404)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:357)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1228)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:998)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:934)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest$1CollectionThread.run(CollectionsAPIDistributedZkTest.java:994)




Build Log:
[...truncated 12402 lines...]
   [junit4] JVM J2: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/temp/junit4-J2-20160526_132605_163.sysout
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] Dumping heap to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/heapdumps/java_pid32111.hprof
 ...
   [junit4] Heap dump file created [605410946 bytes in 8.599 secs]
   [junit4] <<< JVM J2: EOF 

   [junit4] JVM J2: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/temp/junit4-J2-20160526_132605_163.syserr
   [junit4] >>> JVM J2 emitted unexpected output (verbatim) 
   [junit4] WARN: Unhandled exception in event serialization. -> 
java.lang.OutOfMemoryError: GC overhead limit exceeded
   [junit4] <<< JVM J2: EOF 

[...truncated 193 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest
   [junit4]   2> Creating dataDir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-6.x/solr/build/solr-core/test/J1/temp/solr.cloud.hdfs.HdfsCollectionsAPIDistributedZkTest_3B1B88D3E57A4A81-001/init-core-data-001
   [junit4]   2> 4664708 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false)
   [junit4]   2> 4664710 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   1> Formatting using clusterid: testClusterID
   [junit4]   2> 4664751 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.a.h.m.i.MetricsConfig Cannot locate configuration: tried 
hadoop-metrics2-namenode.properties,hadoop-metrics2.properties
   [junit4]   2> 4664770 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be enabled using Log4j
   [junit4]   2> 4664771 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.m.log jetty-6.1.26
   [junit4]   2> 4664781 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.m.log Extract 
jar:file:/x1/jenkins/.ivy2/cache/org.apache.hadoop/hadoop-hdfs/tests/hadoop-hdfs-2.6.0-tests.jar!/webapps/hdfs
 to ./temp/Jetty_localhost_37901_hdfs.jww2m8/webapp
   [junit4]   2> 4664869 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.m.log NO JSP Support for /, did not find 
org.apache.jasper.servlet.JspServlet
   [junit4]   2> 4665164 INFO  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.m.log Started 
HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:37901
   [junit4]   2> 4665293 WARN  
(SUITE-HdfsCollectionsAPIDistributedZkTest-seed#[3B1B88D3E57A4A81]-worker) [
] o.a.h.h.HttpRequestLog Jetty request log can only be 

[jira] [Comment Edited] (SOLR-8776) Support RankQuery in grouping

2016-05-26 Thread Diego Ceccarelli (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302188#comment-15302188
 ] 

Diego Ceccarelli edited comment on SOLR-8776 at 5/26/16 3:01 PM:
-

Thanks [~aanilpala]. A file was missing in the patch, so I just submitted a new 
patch with the missing file and tested it on the latest upstream version 
(last commit 268da5be4). Please do not hesitate to contact me if you have 
comments :) 


was (Author: diegoceccarelli):
add Add RerankTermSecondPassGroupingCollector


> Support RankQuery in grouping
> -
>
> Key: SOLR-8776
> URL: https://issues.apache.org/jira/browse/SOLR-8776
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 6.0
>Reporter: Diego Ceccarelli
>Priority: Minor
> Fix For: 6.0
>
> Attachments: 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch
>
>
> Currently it is not possible to use RankQuery [1] and Grouping [2] together 
> (see also [3]). In some situations Grouping can be replaced by Collapse and 
> Expand Results [4] (that supports reranking), but i) collapse cannot 
> guarantee that at least a minimum number of groups will be returned for a 
> query, and ii) in the Solr Cloud setting you will have constraints on how to 
> partition the documents among the shards.
> I'm going to start working on supporting RankQuery in grouping. I'll start 
> attaching a patch with a test that fails because grouping does not support 
> the rank query and then I'll try to fix the problem, starting from the non 
> distributed setting (GroupingSearch).
> My feeling is that since grouping is mostly performed by Lucene, RankQuery 
> should be refactored and moved (or partially moved) there. 
> Any feedback is welcome.
> [1] https://cwiki.apache.org/confluence/display/solr/RankQuery+API 
> [2] https://cwiki.apache.org/confluence/display/solr/Result+Grouping
> [3] 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201507.mbox/%3ccahm-lpuvspest-sw63_8a6gt-wor6ds_t_nb2rope93e4+s...@mail.gmail.com%3E
> [4] 
> https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8776) Support RankQuery in grouping

2016-05-26 Thread Diego Ceccarelli (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Diego Ceccarelli updated SOLR-8776:
---
Attachment: 0001-SOLR-8776-Support-RankQuery-in-grouping.patch

add Add RerankTermSecondPassGroupingCollector


> Support RankQuery in grouping
> -
>
> Key: SOLR-8776
> URL: https://issues.apache.org/jira/browse/SOLR-8776
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 6.0
>Reporter: Diego Ceccarelli
>Priority: Minor
> Fix For: 6.0
>
> Attachments: 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch
>
>
> Currently it is not possible to use RankQuery [1] and Grouping [2] together 
> (see also [3]). In some situations Grouping can be replaced by Collapse and 
> Expand Results [4] (that supports reranking), but i) collapse cannot 
> guarantee that at least a minimum number of groups will be returned for a 
> query, and ii) in the Solr Cloud setting you will have constraints on how to 
> partition the documents among the shards.
> I'm going to start working on supporting RankQuery in grouping. I'll start 
> attaching a patch with a test that fails because grouping does not support 
> the rank query and then I'll try to fix the problem, starting from the non 
> distributed setting (GroupingSearch).
> My feeling is that since grouping is mostly performed by Lucene, RankQuery 
> should be refactored and moved (or partially moved) there. 
> Any feedback is welcome.
> [1] https://cwiki.apache.org/confluence/display/solr/RankQuery+API 
> [2] https://cwiki.apache.org/confluence/display/solr/Result+Grouping
> [3] 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201507.mbox/%3ccahm-lpuvspest-sw63_8a6gt-wor6ds_t_nb2rope93e4+s...@mail.gmail.com%3E
> [4] 
> https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-05-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302187#comment-15302187
 ] 

Michael McCandless commented on LUCENE-7301:


OK test fails for me:

{noformat}
1) 
testSomeSortOfWeirdFlushIssue(org.apache.lucene.index.TestNumericDocValuesUpdates)
java.lang.AssertionError: expected:<326> but was:<315>
at 
__randomizedtesting.SeedInfo.seed([CD2F76A9BDF7F337:4B62C8600B01B35]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:147)
at 
org.apache.lucene.index.TestNumericDocValuesUpdates.testSomeSortOfWeirdFlushIssue(TestNumericDocValuesUpdates.java:121)
{noformat}

It fails on both 6.x and master ... so it's not related to index sorting (this 
was my first guess!).
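
Not the actual test, but a minimal sketch of the update pattern the issue 
describes (interleaved updateDocument / updateNumericDocValue calls with a 
tiny maxBufferedDocs; ids and field names are made up):
{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.document.StringField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.Term;
import org.apache.lucene.store.RAMDirectory;

public class InterleavedUpdatesSketch {
  public static void main(String[] args) throws Exception {
    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setMaxBufferedDocs(2);                                  // force very frequent flushes
    try (RAMDirectory dir = new RAMDirectory();
         IndexWriter w = new IndexWriter(dir, iwc)) {
      for (int i = 0; i < 100; i++) {
        String id = "id" + (i % 10);
        Document doc = new Document();
        doc.add(new StringField("id", id, Field.Store.NO));
        doc.add(new NumericDocValuesField("val", i));
        w.updateDocument(new Term("id", id), doc);              // replace the whole document
        w.updateNumericDocValue(new Term("id", id), "val", i);  // then update only its doc value
      }
      w.commit();
    }
  }
}
{code}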

> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and I *think* I've 
> tracked it down to a bug in IndexWriter when the client intermixes calls to 
> updateDocument with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (I suspect "how low" depends 
> on the quantity/types of updates -- but I *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7301) updateNumericDocValue mixed with updateDocument can cause data loss in some randomized testing

2016-05-26 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302178#comment-15302178
 ] 

Michael McCandless commented on LUCENE-7301:


Thanks [~hossman] I'll have a look!  Love the test name :)

> updateNumericDocValue mixed with updateDocument can cause data loss in some 
> randomized testing
> --
>
> Key: LUCENE-7301
> URL: https://issues.apache.org/jira/browse/LUCENE-7301
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
> Attachments: LUCENE-7301.patch
>
>
> SOLR-5944 has been held up for a while due to some extremely rare randomized 
> test failures.
> Ishan and I have been working on whittling those Solr test failures down, 
> trying to create more isolated reproducible test failures, and I *think* I've 
> tracked it down to a bug in IndexWriter when the client intermixes calls to 
> updateDocument with calls to updateNumericDocValue *AND* 
> IndexWriterConfig.setMaxBufferedDocs is very low (I suspect "how low" depends 
> on the quantity/types of updates -- but I *just* got something that 
> reproduced, and haven't tried reproducing with higher values of 
> maxBufferedDocs and larger sequences of updateDocument / 
> updateNumericDocValue calls).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-9141:
-
Attachment: SOLR-9141.patch

> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: Joel Bernstein
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch, SOLR-9141.patch, SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at 

[jira] [Resolved] (LUCENE-7305) Use macro average in confusion matrix metrics to normalize imbalanced classes

2016-05-26 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-7305.
-
Resolution: Fixed

> Use macro average in confusion matrix metrics to normalize imbalanced classes
> -
>
> Key: LUCENE-7305
> URL: https://issues.apache.org/jira/browse/LUCENE-7305
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 6.1
>
>
> {{ConfusionMatrix}} multi class measures should be based on macro average to 
> avoid bias (for the good or the bad) from imbalanced classes.
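
As a small made-up numeric illustration of why macro averaging matters for 
imbalanced classes (this is not the ConfusionMatrix API):
{code}
import java.util.Arrays;

public class MacroAverageExample {
  public static void main(String[] args) {
    // One dominant class and one rare class (hypothetical per-class precision and support).
    double[] precision = {0.90, 0.10};
    long[] support = {990, 10};

    double macro = Arrays.stream(precision).average().orElse(0);      // (0.90 + 0.10) / 2 = 0.50
    long total = Arrays.stream(support).sum();
    double weighted = 0;
    for (int i = 0; i < precision.length; i++) {
      weighted += precision[i] * support[i] / (double) total;         // ~0.89, hides the rare class
    }
    System.out.println("macro=" + macro + " weighted=" + weighted);
  }
}
{code}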



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8583) Apply highlighting to hl.alternateField

2016-05-26 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302172#comment-15302172
 ] 

David Smiley commented on SOLR-8583:


This is looking *much* better -- nice job Jan!  That FvhContainer is perfect; 
I'm kicking myself for not thinking of that already.

The test exhibits a problem related to not using hl.requireFieldMatch.  Looking 
at the test, I see we have a query on tv_text yet we're asking to highlight 
t_text (falling back on tv_text as alternate).  What we assert is sensible 
based on these args, but this is an unnatural example.  A more natural example 
is that the query is on t_text -- the same field that is highlighted.  What 
should then happen?  Well, we could try and make it work by setting 
hl.requireFieldMatch=false or we could demand that this be set as a 
prerequisite to highlighting alternate fields.  Or we could leave the logic be 
and document that you most likely need to set this to false (what I'm kinda 
leaning to but I have no conviction).  Note that FVH doesn't support per-field 
overrides of that setting, so if we try to set that ourselves, then it won't 
work with FVH.  How to handle this is debatable. In any case, the tests should 
be expanded to also query on t_text, in addition to what they test now, which is 
good to test too.
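
For concreteness, the more natural case could look roughly like this in SolrJ 
(illustrative only; whether hl.requireFieldMatch must be false here is exactly 
the open question above):
{code}
import org.apache.solr.client.solrj.SolrQuery;

public class AlternateFieldHighlightParams {
  public static SolrQuery build() {
    // Query the same field we highlight, with tv_text as the alternate (field names from the test).
    SolrQuery q = new SolrQuery("t_text:document");
    q.setHighlight(true);
    q.set("hl.fl", "t_text");
    q.set("hl.alternateField", "tv_text");
    q.set("hl.highlightAlternate", "true");    // the new param added by this issue
    q.set("hl.requireFieldMatch", "false");    // likely needed so query terms highlight in the alternate field
    return q;
  }
}
{code}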

Minor quibbles:
* can you order the parameters to doHighlightingOfField to have more 
consistency with the other methods that take much of the same parameters: doc, 
docId, schemaField, fvhContainer, query, reader, req


> Apply highlighting to hl.alternateField
> ---
>
> Key: SOLR-8583
> URL: https://issues.apache.org/jira/browse/SOLR-8583
> Project: Solr
>  Issue Type: Improvement
>  Components: highlighter
>Affects Versions: 5.4
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.1
>
> Attachments: SOLR-8583.patch, SOLR-8583.patch, SOLR-8583.patch, 
> SOLR-8583.patch
>
>
> Today, you can configure hl.alternateField for highlighter to display if no 
> snippets were produced from original field. But the contents of the fallback 
> field is output without highlighting the original query terms.
> This issue will cause alternate field to be highlighted with no snippet 
> generation, and still respect max length. You can turn it off using new param 
> {{hl.highlightAlternate=false}}. Supported highlighters: Simple, FVH



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-8029) Modernize and standardize Solr APIs

2016-05-26 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301669#comment-15301669
 ] 

Noble Paul edited comment on SOLR-8029 at 5/26/16 2:37 PM:
---

Thanks Cassandra

bq. Schema endpoints don't seem to include GET methods for fields, copyfields or 
dynamic fields.  

Right, those specs are not included in the output. Will add them.

bq. Replacements for the Blob Store API and the ConfigSets API are not included?

Not yet, I'm planning to add them to the v2 path as is. 
I need to write the spec for them.




was (Author: noble.paul):
Thanks Cassandra

bq.Schema endpoints don't seem to include GET methods for fields, copyfields or 
dynamic fields.  

Right , those specs are not included in the output. Will add them

bq.Replacements for the Blob Store API and the ConfigSets API are not included?

Not yet, I'm planning to add them to the v2 path as is. 



> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7305) Use macro average in confusion matrix metrics to normalize imbalanced classes

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302161#comment-15302161
 ] 

ASF subversion and git services commented on LUCENE-7305:
-

Commit 55d854566e1e3c14cd91d91f414469104c935103 in lucene-solr's branch 
refs/heads/branch_6x from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=55d8545 ]

LUCENE-7305 - use macro average in confusion matrix metrics, removed unused 
import in datasplitter
(cherry picked from commit dc50b79)


> Use macro average in confusion matrix metrics to normalize imbalanced classes
> -
>
> Key: LUCENE-7305
> URL: https://issues.apache.org/jira/browse/LUCENE-7305
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 6.1
>
>
> {{ConfusionMatrix}} multi class measures should be based on macro average to 
> avoid bias (for the good or the bad) from imbalanced classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread James Dyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

James Dyer updated SOLR-9141:
-
Attachment: SOLR-9141.patch

Here's a final patch ([^SOLR-9141.patch]) that I will try to commit & 
back-port later today.

bq. for consistency could use ((Number)bucket.get("count")).longValue();
Of course.

bq.  nit but it probably makes sense to pull the numWorkers logic out into a 
method so it doesn't have to be adjusted in every test.
Got it.



> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: Joel Bernstein
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch, SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> 

Re: [VOTE] Release Lucene/Solr 6.0.1 RC2

2016-05-26 Thread Adrien Grand
+1 SUCCESS! [0:55:13.784752]

On Thu, May 26, 2016 at 08:49, Tomás Fernández Löbbe wrote:

> +1
> SUCCESS! [1:13:52.067157]
>
> On Wed, May 25, 2016 at 2:52 PM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Thanks Steve.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>> On Wed, May 25, 2016 at 4:45 PM, Steve Rowe  wrote:
>>
>>>
>>> > On May 25, 2016, at 11:27 AM, David Smiley 
>>> wrote:
>>> >
>>> > The problem I had was that I was on branch_6x not the release branch.
>>> I thought it'd be good enough but apparently not.
>>> >
>>> > On Wed, May 25, 2016 at 9:13 AM Steve Rowe  wrote:
>>> >
>>> >> On May 25, 2016, at 8:46 AM, Michael McCandless <
>>> luc...@mikemccandless.com> wrote:
>>> >>
>>> >> David did you use master's smoke tester?
>>> >>
>>> >> You must use the version on 6.0.x.
>>> >
>>> > I think it should be possible to verify that users are running the
>>> appropriate version of the smoke tester - I’ll take a look.
>>>
>>> I committed a fix: the smoke tester will now fail when run against an
>>> incompatible release, i.e. one with a different major.minor version.
>>>
>>> --
>>> Steve
>>> www.lucidworks.com
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>


[jira] [Resolved] (LUCENE-6763) Make MultiPhraseQuery immutable

2016-05-26 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6763.
--
Resolution: Duplicate

> Make MultiPhraseQuery immutable
> ---
>
> Key: LUCENE-6763
> URL: https://issues.apache.org/jira/browse/LUCENE-6763
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
>Priority: Minor
>
> We should make MultiPhraseQuery immutable similarly to PhraseQuery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7303) Avoid NPE if classField doesn't exist in SNBC

2016-05-26 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili resolved LUCENE-7303.
-
Resolution: Fixed

> Avoid NPE if classField doesn't exist in SNBC
> -
>
> Key: LUCENE-7303
> URL: https://issues.apache.org/jira/browse/LUCENE-7303
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>Priority: Minor
> Fix For: 6.1
>
>
> {{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
> classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.
> While that is unlikely to happen (it doesn't make much sense to use a 
> classifier without specifying an existing class field), it may happen during 
> testing and therefore better to avoid throwing a NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7303) Avoid NPE if classField doesn't exist in SNBC

2016-05-26 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-7303:

Priority: Minor  (was: Major)

> Avoid NPE if classField doesn't exist in SNBC
> -
>
> Key: LUCENE-7303
> URL: https://issues.apache.org/jira/browse/LUCENE-7303
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>Priority: Minor
> Fix For: 6.1
>
>
> {{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
> classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.
> While that is unlikely to happen (it doesn't make much sense to use a 
> classifier without specifying an existing class field), it may happen during 
> testing and therefore better to avoid throwing a NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7303) Avoid NPE if classField doesn't exist in SNBC

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302125#comment-15302125
 ] 

ASF subversion and git services commented on LUCENE-7303:
-

Commit 8808cf5373522f37bce509729b0b3a7fc9bcbd88 in lucene-solr's branch 
refs/heads/master from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8808cf5 ]

LUCENE-7303 - avoid NPE in MultiFields.getTerms(leafReader, classFieldName), 
removed duplicated code in DocumentSNBC


> Avoid NPE if classField doesn't exist in SNBC
> -
>
> Key: LUCENE-7303
> URL: https://issues.apache.org/jira/browse/LUCENE-7303
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 6.1
>
>
> {{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
> classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.
> While that is unlikely to happen (it doesn't make much sense to use a 
> classifier without specifying an existing class field), it may happen during 
> testing and therefore better to avoid throwing a NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7303) Avoid NPE if classField doesn't exist in SNBC

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302136#comment-15302136
 ] 

ASF subversion and git services commented on LUCENE-7303:
-

Commit 8c6493151738314420ce5ffb678dbb9170c64d9a in lucene-solr's branch 
refs/heads/branch_6x from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8c64931 ]

LUCENE-7303 - avoid NPE in MultiFields.getTerms(leafReader, classFieldName), 
removed duplicated code in DocumentSNBC
(cherry picked from commit 8808cf5)


> Avoid NPE if classField doesn't exist in SNBC
> -
>
> Key: LUCENE-7303
> URL: https://issues.apache.org/jira/browse/LUCENE-7303
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 6.1
>
>
> {{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
> classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.
> While that is unlikely to happen (it doesn't make much sense to use a 
> classifier without specifying an existing class field), it may happen during 
> testing and therefore better to avoid throwing a NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7305) Use macro average in confusion matrix metrics to normalize imbalanced classes

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302126#comment-15302126
 ] 

ASF subversion and git services commented on LUCENE-7305:
-

Commit dc50b79a146d95b8dd6d68523adfcedb2440a0e2 in lucene-solr's branch 
refs/heads/master from [~teofili]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=dc50b79 ]

LUCENE-7305 - use macro average in confusion matrix metrics, removed unused 
import in datasplitter


> Use macro average in confusion matrix metrics to normalize imbalanced classes
> -
>
> Key: LUCENE-7305
> URL: https://issues.apache.org/jira/browse/LUCENE-7305
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 6.1
>
>
> {{ConfusionMatrix}} multi class measures should be based on macro average to 
> avoid bias (for the good or the bad) from imbalanced classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7305) Use macro average in confusion matrix metrics to normalize imbalanced classes

2016-05-26 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created LUCENE-7305:
---

 Summary: Use macro average in confusion matrix metrics to 
normalize imbalanced classes
 Key: LUCENE-7305
 URL: https://issues.apache.org/jira/browse/LUCENE-7305
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/classification
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 6.1


{{ConfusionMatrix}} multi class measures should be based on macro average to 
avoid bias (for the good or the bad) from imbalanced classes.
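
For reference, macro averaging gives each class equal weight regardless of its 
support. E.g. for precision over C classes (recall and F1 are analogous):
{noformat}
P_macro = (1/C) * (P_1 + P_2 + ... + P_C)
{noformat}
A micro average, by contrast, pools the per-document counts and is therefore 
dominated by the most frequent classes.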



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7304) Doc values based block join implementation

2016-05-26 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-7304:
--
Attachment: LUCENE_7304.patch

Attached a working version of a doc values based block join query. 
The app storing docs is responsible for adding the numeric doc values field 
with the right offsets.
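
For illustration, a minimal sketch of what that indexing-side responsibility 
could look like under the offset scheme described in the issue (each child 
stores its distance to the parent, the parent stores the distance to its first 
child). The field names are hypothetical and not taken from the attached patch:
{code}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.index.IndexWriter;

final class BlockOffsetsIndexer {
  static void addBlock(IndexWriter writer, List<Document> children, Document parent)
      throws IOException {
    List<Document> block = new ArrayList<>(children.size() + 1);
    int n = children.size();
    for (int i = 0; i < n; i++) {
      Document child = children.get(i);
      // child i sits (n - i) docids before its parent, which is last in the block
      child.add(new NumericDocValuesField("offset_to_parent", n - i));
      block.add(child);
    }
    // the parent is n docids after its first child
    parent.add(new NumericDocValuesField("offset_to_first_child", n));
    block.add(parent);
    writer.addDocuments(block); // indexed as one contiguous block
  }
}
{code}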

> Doc values based block join implementation
> --
>
> Key: LUCENE-7304
> URL: https://issues.apache.org/jira/browse/LUCENE-7304
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Martijn van Groningen
>Priority: Minor
> Attachments: LUCENE_7304.patch
>
>
> At query time the block join relies on a bitset for finding the previous 
> parent doc while advancing the doc id iterator. On large indices these 
> bitsets can consume large amounts of JVM heap space.  Also, typically due to 
> the nature of how these bitsets are set, the 'FixedBitSet' implementation is 
> used.
> The idea I had was to replace the bitset usage with a numeric doc values 
> field that stores offsets. Each child doc stores how many docids it is from 
> its parent doc and each parent stores how many docids it is apart from its 
> first child. At query time this information can be used to perform the block 
> join.
> I think another benefit of this approach is that external tools can now 
> easily determine if a doc is part of a block of documents and perhaps this 
> also helps index time sorting?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7304) Doc values based block join implementation

2016-05-26 Thread Martijn van Groningen (JIRA)
Martijn van Groningen created LUCENE-7304:
-

 Summary: Doc values based block join implementation
 Key: LUCENE-7304
 URL: https://issues.apache.org/jira/browse/LUCENE-7304
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Martijn van Groningen
Priority: Minor


At query time the block join relies on a bitset for finding the previous parent 
doc while advancing the doc id iterator. On large indices these bitsets can 
consume large amounts of JVM heap space.  Also, typically due to the nature of 
how these bitsets are set, the 'FixedBitSet' implementation is used.

The idea I had was to replace the bitset usage with a numeric doc values field 
that stores offsets. Each child doc stores how many docids it is from its 
parent doc and each parent stores how many docids it is apart from its first 
child. At query time this information can be used to perform the block join.

I think another benefit of this approach is that external tools can now easily 
determine if a doc is part of a block of documents and perhaps this also helps 
index time sorting?
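
As a rough sketch of the query-time side, assuming each child doc carries a 
numeric doc values field with its offset to the parent as described above (the 
field name is hypothetical and this is not the attached patch):
{code}
import java.io.IOException;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.NumericDocValues;

final class OffsetBlockJoinSketch {
  // Resolve a child's parent docID from the per-child offset instead of
  // consulting a parent bitset. (Null check for a missing field omitted.)
  static int parentOf(LeafReader reader, int childDoc) throws IOException {
    NumericDocValues toParent = reader.getNumericDocValues("offset_to_parent");
    // the parent is the last doc of its block, so it lies 'offset' docids
    // after the child
    return childDoc + (int) toParent.get(childDoc);
  }
}
{code}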



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7303) Avoid NPE if classField doesn't exist in SNBC

2016-05-26 Thread Tommaso Teofili (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tommaso Teofili updated LUCENE-7303:

Description: 
{{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.
While that is unlikely to happen (it doesn't make much sense to use a 
classifier without specifying an existing class field), it may happen during 
testing, and therefore it is better to avoid throwing an NPE.

  was:{{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.


> Avoid NPE if classField doesn't exist in SNBC
> -
>
> Key: LUCENE-7303
> URL: https://issues.apache.org/jira/browse/LUCENE-7303
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/classification
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
> Fix For: 6.1
>
>
> {{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
> classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.
> While that is unlikely to happen (it doesn't make much sense to use a 
> classifier without specifying an existing class field), it may happen during 
> testing and therefore better to avoid throwing a NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7303) Avoid NPE if classField doesn't exist in SNBC

2016-05-26 Thread Tommaso Teofili (JIRA)
Tommaso Teofili created LUCENE-7303:
---

 Summary: Avoid NPE if classField doesn't exist in SNBC
 Key: LUCENE-7303
 URL: https://issues.apache.org/jira/browse/LUCENE-7303
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/classification
Reporter: Tommaso Teofili
Assignee: Tommaso Teofili
 Fix For: 6.1


{{SimpleNaiveBayesClassifier}} uses _MultiFields.getTerms(leafReader, 
classFieldName)._ but doesn't check if the resulting _Terms_ is null or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread Minoru Osuka (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302079#comment-15302079
 ] 

Minoru Osuka commented on SOLR-9141:


[~joel.bernstein], +1. ((Number)bucket.get("count")).longValue(); seems faster 
than my patch (on my Macbook).

{noformat}
package test;

import java.util.ArrayList;
import java.util.List;

public class CastTest {
    public static void main(String[] args) {
        List<Object> objList = new ArrayList<>();
        for (int i = 0; i < 1; i++) {
            objList.add(new Integer(i));
        }

        long start = System.nanoTime();
        for (Object obj : objList) {
            long l = new Long(obj.toString()).longValue();
        }
        long end = System.nanoTime();
        System.out.println(String.format("new Long(obj.toString()).longValue(); : %1$,10d ns", (end - start)));

        start = System.nanoTime();
        for (Object obj : objList) {
            long l = ((Number) obj).longValue();
        }
        end = System.nanoTime();
        System.out.println(String.format("((Number)obj).longValue();            : %1$,10d ns", (end - start)));
    }
}
{noformat}

{noformat}
new Long(obj.toString()).longValue(); : 11,301,332 ns
((Number)obj).longValue();            :    831,366 ns
{noformat}


> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: Joel Bernstein
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> 

[jira] [Updated] (SOLR-9120) Luke NoSuchFileException

2016-05-26 Thread Markus Jelsma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Markus Jelsma updated SOLR-9120:

Fix Version/s: 6.1

> Luke NoSuchFileException
> 
>
> Key: SOLR-9120
> URL: https://issues.apache.org/jira/browse/SOLR-9120
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Markus Jelsma
> Fix For: 6.1, master (7.0)
>
>
> On Solr 6.0, we frequently see the following errors popping up:
> {code}
> java.nio.file.NoSuchFileException: 
> /var/lib/solr/logs_shard2_replica1/data/index/segments_2c5
>   at 
> sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
>   at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
>   at 
> sun.nio.fs.UnixFileAttributeViews$Basic.readAttributes(UnixFileAttributeViews.java:55)
>   at 
> sun.nio.fs.UnixFileSystemProvider.readAttributes(UnixFileSystemProvider.java:144)
>   at 
> sun.nio.fs.LinuxFileSystemProvider.readAttributes(LinuxFileSystemProvider.java:99)
>   at java.nio.file.Files.readAttributes(Files.java:1737)
>   at java.nio.file.Files.size(Files.java:2332)
>   at org.apache.lucene.store.FSDirectory.fileLength(FSDirectory.java:243)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.fileLength(NRTCachingDirectory.java:131)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getFileLength(LukeRequestHandler.java:597)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.getIndexInfo(LukeRequestHandler.java:585)
>   at 
> org.apache.solr.handler.admin.LukeRequestHandler.handleRequestBody(LukeRequestHandler.java:137)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:518)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
>   at java.lang.Thread.run(Thread.java:745)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302013#comment-15302013
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit 5df4ca1ebdf33508c16be9c00db622d3ec7fe2ec in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5df4ca1 ]

SOLR-8029: Added the missing paths in schema


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8029) Modernize and standardize Solr APIs

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8029?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15302012#comment-15302012
 ] 

ASF subversion and git services commented on SOLR-8029:
---

Commit c07a0bfe8e47f068a00a979c5e0f09c206ced688 in lucene-solr's branch 
refs/heads/apiv2 from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c07a0bf ]

SOLR-8029: wrong doc


> Modernize and standardize Solr APIs
> ---
>
> Key: SOLR-8029
> URL: https://issues.apache.org/jira/browse/SOLR-8029
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 6.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>  Labels: API, EaseOfUse
> Fix For: 6.0
>
> Attachments: SOLR-8029.patch, SOLR-8029.patch, SOLR-8029.patch, 
> SOLR-8029.patch
>
>
> Solr APIs have organically evolved and they are sometimes inconsistent with 
> each other or not in sync with the widely followed conventions of HTTP 
> protocol. Trying to make incremental changes to make them modern is like 
> applying band-aid. So, we have done a complete rethink of what the APIs 
> should be. The most notable aspects of the API are as follows:
> The new set of APIs will be placed under a new path {{/solr2}}. The legacy 
> APIs will continue to work under the {{/solr}} path as they used to and they 
> will be eventually deprecated.
> There are 4 types of requests in the new API 
> * {{/v2//*}} : Hit a collection directly or manage 
> collections/shards/replicas 
> * {{/v2//*}} : Hit a core directly or manage cores 
> * {{/v2/cluster/*}} : Operations on cluster not pertaining to any collection 
> or core. e.g: security, overseer ops etc
> This will be released as part of a major release. Check the link given below 
> for the full specification.  Your comments are welcome
> [Solr API version 2 Specification | http://bit.ly/1JYsBMQ]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5944) Support updates of numeric DocValues

2016-05-26 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301985#comment-15301985
 ] 

Ishan Chattopadhyaya commented on SOLR-5944:


bq. (I'm pretty sure) I was able to reproduce the root cause of the randomized 
failures in LUCENE-7301.
Thanks Hoss for beating me to it!

{quote}
testReplay5 - still uses "inc" for doc id=0, but uses "set" for every other doc 
in the index

this currently fails with an NPE in 
AtomicUpdateDocumentMerger.doInPlaceUpdateMerge(AtomicUpdateDocumentMerger.java:283)
{quote}
I think the problem there is that a "set" operation was attempted on a document 
that doesn't yet exist in the index. I think such an operation works with 
atomic updates, but the underlying docValues API doesn't support updating dv 
fields that don't exist yet. I will try to handle this better instead of 
throwing an NPE.

I shall work on addressing your review comments regarding the tests and 
increasing their scope as you suggest. My idea behind the tests was (and the 
naming could be improved): TestInPlaceUpdate tests some basic cases in 
non-cloud mode, TestStressInPlaceUpdates tests lots of documents, lots of 
updates and lots of threads in cloud mode, and InPlaceUpdateDistribTest covers 
some basic operations/scenarios in cloud mode (including testing whether the 
same document was updated or a new one was created). I was thinking that once 
we get past the DV updates flushing issue (LUCENE-7301), we can focus on 
further improving the scope of the tests. Thanks for your review!

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: DUP.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.62D328FA1DEA57FD.fail2.txt, 
> hoss.62D328FA1DEA57FD.fail3.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8583) Apply highlighting to hl.alternateField

2016-05-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8583:
--
Attachment: SOLR-8583.patch

The reason for the difference in length probably lies in how the fragmenters 
work. I tried out some more sizes and it changes, although not at the limits I 
expected for the Simple highlighter.

Here's a new patch using only {{FRAGSIZE}} to limit maxAlternateFieldLength, 
instead of also using {{MAX_CHARS}}, as that did not add any value.

> Apply highlighting to hl.alternateField
> ---
>
> Key: SOLR-8583
> URL: https://issues.apache.org/jira/browse/SOLR-8583
> Project: Solr
>  Issue Type: Improvement
>  Components: highlighter
>Affects Versions: 5.4
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.1
>
> Attachments: SOLR-8583.patch, SOLR-8583.patch, SOLR-8583.patch, 
> SOLR-8583.patch
>
>
> Today, you can configure hl.alternateField for highlighter to display if no 
> snippets were produced from original field. But the contents of the fallback 
> field is output without highlighting the original query terms.
> This issue will cause alternate field to be highlighted with no snippet 
> generation, and still respect max length. You can turn it off using new param 
> {{hl.highlightAlternate=false}}. Supported highlighters: Simple, FVH



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-8583) Apply highlighting to hl.alternateField

2016-05-26 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-8583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-8583:
--
Attachment: SOLR-8583.patch

New patch incorporating [~dsmiley]'s comments, more or less:

* Went back to swapping {{req.params}} with a wrapDefaults version.
* New method {{doHighlightingOfField()}} which removes the code duplication.
* Lazy FVH init by passing around a new inner class FvhContainer with members 
{{fvh}} and {{fieldQuery}}, which can then be altered by the methods using it.
* Moved highlighting of the alternate field into {{alternateField()}} to gather 
all the logic in the same place.
* Field-name loop now looks like this: {code}
// Highlight per-field
for (String fieldName : fieldNames) {
  SchemaField schemaField = schema.getFieldOrNull(fieldName);

  Object fieldHighlights; // object type allows flexibility for subclassers
  fieldHighlights = doHighlightingOfField(schemaField, params, fvhContainer,
      doc, docId, query, reader, req);

  if (fieldHighlights == null) {
    fieldHighlights = alternateField(doc, fieldName, req, docId, query, reader,
        schema, fvhContainer);
  }

  if (fieldHighlights != null) {
    docHighlights.add(fieldName, fieldHighlights);
  }
} // for each field
{code}

What puzzles me is that the changes should be purely structural, with no 
functionality change, yet one of the tests started failing. It was the first 
test of {{testAlternateSummaryWithHighlighting()}}, which sets 
maxAlternateFieldLength=18. Earlier it returned 
{{keyword is only here}}, but with the last patch I had 
to change it to {{keyword is only}}.

Currently I'm not able to debug tests in my IntelliJ 16, so I just changed the 
assert instead of digging further.

> Apply highlighting to hl.alternateField
> ---
>
> Key: SOLR-8583
> URL: https://issues.apache.org/jira/browse/SOLR-8583
> Project: Solr
>  Issue Type: Improvement
>  Components: highlighter
>Affects Versions: 5.4
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.1
>
> Attachments: SOLR-8583.patch, SOLR-8583.patch, SOLR-8583.patch
>
>
> Today, you can configure hl.alternateField for highlighter to display if no 
> snippets were produced from original field. But the contents of the fallback 
> field is output without highlighting the original query terms.
> This issue will cause alternate field to be highlighted with no snippet 
> generation, and still respect max length. You can turn it off using new param 
> {{hl.highlightAlternate=false}}. Supported highlighters: Simple, FVH



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 206 - Failure!

2016-05-26 Thread Simon Willnauer
pushed a fix - sorry for the noise

On Thu, May 26, 2016 at 12:03 PM, Policeman Jenkins Server
 wrote:
> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/206/
> Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC
>
> 2 tests failed.
> FAILED:  
> junit.framework.TestSuite.org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper
>
> Error Message:
> Resource in scope SUITE failed to close. Resource was registered from thread 
> Thread[id=35, 
> name=TEST-TestHardLinkCopyDirectoryWrapper.testCopyHardLinks-seed#[B160A9C164A284D],
>  state=RUNNABLE, group=TGRP-TestHardLinkCopyDirectoryWrapper], registration 
> stack trace below.
>
> Stack Trace:
> com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
> SUITE failed to close. Resource was registered from thread Thread[id=35, 
> name=TEST-TestHardLinkCopyDirectoryWrapper.testCopyHardLinks-seed#[B160A9C164A284D],
>  state=RUNNABLE, group=TGRP-TestHardLinkCopyDirectoryWrapper], registration 
> stack trace below.
> at java.lang.Thread.getStackTrace(Thread.java:1552)
> at 
> com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:173)
> at 
> org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:749)
> at 
> org.apache.lucene.util.LuceneTestCase.wrapDirectory(LuceneTestCase.java:1429)
> at 
> org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1393)
> at 
> org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1373)
> at 
> org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1360)
> at 
> org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper.testCopyHardLinks(TestHardLinkCopyDirectoryWrapper.java:46)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
> at 
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at 
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at 
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
> at 
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
> at 
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
> at 
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at 
> 

[jira] [Updated] (SOLR-9164) Want parameterised field delimiter to JsonRecordReader

2016-05-26 Thread Petri Pyy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Petri Pyy updated SOLR-9164:

Attachment: 0001-SOLR-9164-Want-parameterised-field-delimiter-to-Json.patch

> Want parameterised field delimiter to JsonRecordReader
> --
>
> Key: SOLR-9164
> URL: https://issues.apache.org/jira/browse/SOLR-9164
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 5.6, 6.1, master (7.0)
>Reporter: Petri Pyy
>Priority: Minor
> Attachments: 
> 0001-SOLR-9164-Want-parameterised-field-delimiter-to-Json.patch
>
>
> We have a case where structured JSON input data contains a lot of periods in 
> field names, so the resulting document field names with f=$FQN:/** come out 
> somewhat ugly. It would be nice to have an option to give the delimiter as a 
> parameter (i.e. fdelim=/) to override the currently hardcoded delimiter 
> (period).
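
For illustration, a hypothetical input and the field names it could map to; 
the fdelim parameter is the proposal of this issue, not an existing option:
{noformat}
input JSON:                  {"product": {"specs": {"weight.kg": 1.2}}}
with the hardcoded ".":      product.specs.weight.kg   (the period inside the key blends in)
with the proposed fdelim=/:  product/specs/weight.kg
{noformat}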



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_92) - Build # 206 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/206/
Java: 32bit/jdk1.8.0_92 -server -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=35, 
name=TEST-TestHardLinkCopyDirectoryWrapper.testCopyHardLinks-seed#[B160A9C164A284D],
 state=RUNNABLE, group=TGRP-TestHardLinkCopyDirectoryWrapper], registration 
stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=35, 
name=TEST-TestHardLinkCopyDirectoryWrapper.testCopyHardLinks-seed#[B160A9C164A284D],
 state=RUNNABLE, group=TGRP-TestHardLinkCopyDirectoryWrapper], registration 
stack trace below.
at java.lang.Thread.getStackTrace(Thread.java:1552)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:173)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:749)
at 
org.apache.lucene.util.LuceneTestCase.wrapDirectory(LuceneTestCase.java:1429)
at 
org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1393)
at 
org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1373)
at 
org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1360)
at 
org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper.testCopyHardLinks(TestHardLinkCopyDirectoryWrapper.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Created] (LUCENE-7302) IndexWriter should tell you the order of indexing operations

2016-05-26 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-7302:
--

 Summary: IndexWriter should tell you the order of indexing 
operations
 Key: LUCENE-7302
 URL: https://issues.apache.org/jira/browse/LUCENE-7302
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 6.1, master (7.0)


Today, when you use multiple threads to concurrently index, Lucene
knows the effective order that those operations were applied to the
index, but doesn't return that information back to you.

But this is important to know, if you want to build a reliable search
API on top of Lucene.  Combined with the recently added NRT
replication (LUCENE-5438) it can be a strong basis for an efficient
distributed search API.

I think we should return this information, since we already have it,
and since it could simplify servers (ES/Solr) on top of Lucene:

  - They would not require locking preventing the same id from being
indexed concurrently since they could instead check the returned
sequence number to know which update "won", for features like
"realtime get".  (Locking is probably still needed for features
like optimistic concurrency).

  - When re-applying operations from a prior commit point, e.g. on
recovering after a crash from a transaction log, they can know
exactly which operations made it into the commit and which did
not, and replay only the truly missing operations.

Not returning this just hurts people who try to build servers on top
with clear semantics on crashing/recovering ... I also struggled with
this when building a simple "server wrapper" on top of Lucene
(LUCENE-5376).
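
As a hedged sketch of how a server layer could use such a return value, 
assuming IndexWriter#updateDocument is changed to return a long sequence 
number as proposed here (the names below are illustrative only):
{code}
import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.lucene.document.Document;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.Term;

final class SeqNoTracker {
  private final ConcurrentHashMap<String, Long> latestSeqPerId = new ConcurrentHashMap<>();

  // assumes updateDocument returns the operation's sequence number (the proposal)
  long update(IndexWriter writer, String id, Document doc) throws IOException {
    long seqNo = writer.updateDocument(new Term("id", id), doc);
    // the concurrent update with the highest sequence number "won" for this id,
    // e.g. for realtime get, without per-id locking
    latestSeqPerId.merge(id, seqNo, Math::max);
    return seqNo;
  }
}
{code}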




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_92) - Build # 5868 - Still Failing!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/5868/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=17, 
name=TEST-TestHardLinkCopyDirectoryWrapper.testCopyHardLinks-seed#[49829EF79F499D5B],
 state=RUNNABLE, group=TGRP-TestHardLinkCopyDirectoryWrapper], registration 
stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=17, 
name=TEST-TestHardLinkCopyDirectoryWrapper.testCopyHardLinks-seed#[49829EF79F499D5B],
 state=RUNNABLE, group=TGRP-TestHardLinkCopyDirectoryWrapper], registration 
stack trace below.
at java.lang.Thread.getStackTrace(Thread.java:1552)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:173)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:749)
at 
org.apache.lucene.util.LuceneTestCase.wrapDirectory(LuceneTestCase.java:1429)
at 
org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1393)
at 
org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1373)
at 
org.apache.lucene.util.LuceneTestCase.newFSDirectory(LuceneTestCase.java:1360)
at 
org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper.testCopyHardLinks(TestHardLinkCopyDirectoryWrapper.java:46)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-6.0-Linux (64bit/jdk1.8.0_92) - Build # 186 - Failure!

2016-05-26 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-6.0-Linux/186/
Java: 64bit/jdk1.8.0_92 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Expected to find shardAddress in the up shard info

Stack Trace:
java.lang.AssertionError: Expected to find shardAddress in the up shard info
at 
__randomizedtesting.SeedInfo.seed([BA68A0153E0B5158:323C9FCF90F73CA0]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.TestDistributedSearch.comparePartialResponses(TestDistributedSearch.java:1160)
at 
org.apache.solr.TestDistributedSearch.queryPartialResults(TestDistributedSearch.java:1101)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:961)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1764)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:871)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:921)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1018)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:809)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:460)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:816)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-8776) Support RankQuery in grouping

2016-05-26 Thread Ahmet Anil Pala (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301877#comment-15301877
 ] 

Ahmet Anil Pala commented on SOLR-8776:
---

Hi,

Which branch is this patch compatible with? I've tried branch_6x and 
branch_6_0. Although the patch applied successfully, it prevented the 
source from compiling.

> Support RankQuery in grouping
> -
>
> Key: SOLR-8776
> URL: https://issues.apache.org/jira/browse/SOLR-8776
> Project: Solr
>  Issue Type: Improvement
>  Components: search
>Affects Versions: 6.0
>Reporter: Diego Ceccarelli
>Priority: Minor
> Fix For: 6.0
>
> Attachments: 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch, 
> 0001-SOLR-8776-Support-RankQuery-in-grouping.patch
>
>
> Currently it is not possible to use RankQuery [1] and Grouping [2] together 
> (see also [3]). In some situations Grouping can be replaced by Collapse and 
> Expand Results [4] (which supports reranking), but i) collapse cannot 
> guarantee that at least a minimum number of groups will be returned for a 
> query, and ii) in the SolrCloud setting you will have constraints on how to 
> partition the documents among the shards.
> I'm going to start working on supporting RankQuery in grouping. I'll start 
> by attaching a patch with a test that fails because grouping does not support 
> the rank query, and then I'll try to fix the problem, starting from the 
> non-distributed setting (GroupingSearch).
> My feeling is that since grouping is mostly performed by Lucene, RankQuery 
> should be refactored and moved (or partially moved) there. 
> Any feedback is welcome.
> [1] https://cwiki.apache.org/confluence/display/solr/RankQuery+API 
> [2] https://cwiki.apache.org/confluence/display/solr/Result+Grouping
> [3] 
> http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201507.mbox/%3ccahm-lpuvspest-sw63_8a6gt-wor6ds_t_nb2rope93e4+s...@mail.gmail.com%3E
> [4] 
> https://cwiki.apache.org/confluence/display/solr/Collapse+and+Expand+Results
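
For concreteness, the kind of request this issue wants to support looks roughly
like the SolrJ sketch below, which combines result grouping with the documented
re-rank query parser (rq). The collection, field names, and inner queries are
placeholders; per this issue, the RankQuery is currently not applied when
grouping is enabled:

{noformat}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class GroupedRerankExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrQuery q = new SolrQuery("title:lucene");
      // Result grouping on a placeholder field.
      q.set("group", "true");
      q.set("group.field", "category");
      // Re-rank the top 100 docs with a secondary query; combining this with
      // grouping is exactly what the patch aims to make work.
      q.set("rq", "{!rerank reRankQuery=$rqq reRankDocs=100 reRankWeight=2}");
      q.set("rqq", "popularity:[10 TO *]");
      QueryResponse rsp = client.query(q);
      System.out.println(rsp.getGroupResponse().getValues());
    }
  }
}
{noformat}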



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8593) Integrate Apache Calcite into the SQLHandler

2016-05-26 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301871#comment-15301871
 ] 

Joel Bernstein commented on SOLR-8593:
--

Interesting. I'd like to spend some time in the next couple of weeks to see what 
we can get done in the near term on this ticket. Possibly, the first step is just 
to release functionality equivalent to the current Presto code, with a Calcite 
release. This would provide the base to gradually expand the SQL feature set.

> Integrate Apache Calcite into the SQLHandler
> 
>
> Key: SOLR-8593
> URL: https://issues.apache.org/jira/browse/SOLR-8593
> Project: Solr
>  Issue Type: Improvement
>Reporter: Joel Bernstein
>
> The Presto SQL Parser was perfect for phase one of the SQLHandler. It was 
> nicely split off from the larger Presto project and it did everything that 
> was needed for the initial implementation.
> Phase two of the SQL work, though, will require an optimizer. Here is where 
> Apache Calcite comes into play. It has a battle-tested cost-based optimizer 
> and has been integrated into Apache Drill and Hive.
> This work can begin in trunk following the 6.0 release. The final query plans 
> will continue to be translated to Streaming API objects (TupleStreams), so 
> continued work on the JDBC driver should plug in nicely with the Calcite work.
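
As a usage sketch only: the JDBC path mentioned above looks roughly like the
code below. The connection string, collection name, and fields are
placeholders, and the URL format is assumed from the Parallel SQL
documentation rather than from this ticket; the SolrJ JDBC driver must be on
the classpath:

{noformat}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SolrSqlJdbcExample {
  public static void main(String[] args) throws Exception {
    // Assumed URL format: ZooKeeper host plus the target collection.
    String url = "jdbc:solr://localhost:9983?collection=access_log&aggregationMode=facet";
    try (Connection con = DriverManager.getConnection(url);
         Statement stmt = con.createStatement();
         ResultSet rs = stmt.executeQuery(
             "select response, count(*) as count from access_log group by response")) {
      while (rs.next()) {
        System.out.println(rs.getString("response") + " -> " + rs.getLong("count"));
      }
    }
  }
}
{noformat}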



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301782#comment-15301782
 ] 

Joel Bernstein edited comment on SOLR-9141 at 5/26/16 9:07 AM:
---

[~jdyer], +1 on committing. I like the 
((Number)bucket.get("count")).longValue(); approach as well.
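
For readers following along, a minimal standalone illustration of why that cast
matters: the bucket count may deserialize as Integer (e.g. single shard, small
values) or as Long, so a direct (Long) cast can throw the ClassCastException
reported here, while going through Number works for either. The bucket map
below just stands in for the parsed facet bucket:

{noformat}
import java.util.HashMap;
import java.util.Map;

public class CountCastDemo {
  public static void main(String[] args) {
    Map<String, Object> bucket = new HashMap<>();
    bucket.put("count", 1309);  // an Integer, as a single-shard facet may return

    // long count = (Long) bucket.get("count");  // would throw ClassCastException
    long count = ((Number) bucket.get("count")).longValue();  // safe for Integer or Long
    System.out.println(count);
  }
}
{noformat}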






was (Author: joel.bernstein):
[~jdyer], +1 on the committing. I like the 
((Number)bucket.get("count")).longValue(); approach as well.





> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: Joel Bernstein
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> 

[jira] [Commented] (SOLR-9141) ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet and single shard

2016-05-26 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301782#comment-15301782
 ] 

Joel Bernstein commented on SOLR-9141:
--

[~jdyer], +1 on the committing. I like the 
((Number)bucket.get("count")).longValue(); approach as well.





> ClassCastException occurs in /sql handler with GROUP BY aggregationMode=facet 
> and single shard
> --
>
> Key: SOLR-9141
> URL: https://issues.apache.org/jira/browse/SOLR-9141
> Project: Solr
>  Issue Type: Bug
>  Components: Parallell SQL
>Affects Versions: 6.0
>Reporter: Minoru Osuka
>Assignee: Joel Bernstein
> Attachments: SOLR-9141-test.patch, SOLR-9141-test.patch, 
> SOLR-9141.patch
>
>
> ClassCastException occurs in /sql request handler using -ORDER BY- GROUP BY 
> clause.
> {noformat}
> $ curl --data-urlencode "stmt=select count(*) from access_log" 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"count(*)":1309},
> {"EOF":true,"RESPONSE_TIME":239}]}}
> $ curl --data-urlencode 'stmt=select response, count(*) as count from 
> access_log group by response' 
> "http://localhost:8983/solr/access_log/sql?aggregationMode=facet;
> {"result-set":{"docs":[
> {"EXCEPTION":"java.lang.ClassCastException: java.lang.Integer cannot be cast 
> to java.lang.Long","EOF":true,"RESPONSE_TIME":53}]}}
> {noformat}
> See following error messages:
> {noformat}
> 2016-05-19 10:18:06.477 ERROR (qtp1791930789-21) [c:access_log s:shard1 
> r:core_node1 x:access_log_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream 
> java.io.IOException: java.lang.ClassCastException: java.lang.Integer cannot 
> be cast to java.lang.Long
> at 
> org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:300)
> at 
> org.apache.solr.handler.SQLHandler$LimitStream.open(SQLHandler.java:1265)
> at 
> org.apache.solr.client.solrj.io.stream.SelectStream.open(SelectStream.java:153)
> at 
> org.apache.solr.handler.SQLHandler$MetadataStream.open(SQLHandler.java:1511)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:47)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:362)
> at 
> org.apache.solr.response.TextResponseWriter.writeTupleStream(TextResponseWriter.java:301)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:167)
> at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:183)
> at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:299)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:95)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:60)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:725)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:469)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1676)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.handle(CrossOriginFilter.java:308)
> at 
> org.eclipse.jetty.servlets.CrossOriginFilter.doFilter(CrossOriginFilter.java:262)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at 

[jira] [Resolved] (LUCENE-7300) Add directory wrapper that optionally uses hardlinks in copyFrom

2016-05-26 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer resolved LUCENE-7300.
-
   Resolution: Fixed
 Assignee: Simon Willnauer
Fix Version/s: master (7.0)

> Add directory wrapper that optionally uses hardlinks in copyFrom
> 
>
> Key: LUCENE-7300
> URL: https://issues.apache.org/jira/browse/LUCENE-7300
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 6.1
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
> Fix For: 6.1, master (7.0)
>
> Attachments: LUCENE-7300.patch, LUCENE-7300.patch
>
>
> Today we always do byte-by-byte copy in Directory#copyFrom. While this is 
> reliable and should be the default, certain situations can be improved by 
> using hardlinks if possible to get constant time copy on OS / FS that support 
> such an operation. Something like this could reside in misc if it's contained 
> enough since it requires LinkPermissions to be set and needs to detect if 
> both directories are subclasses of FSDirectory etc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7300) Add directory wrapper that optionally uses hardlinks in copyFrom

2016-05-26 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15301762#comment-15301762
 ] 

ASF subversion and git services commented on LUCENE-7300:
-

Commit a6839beb87a73bff6139df44a7b9168a498dd426 in lucene-solr's branch 
refs/heads/branch_6x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a6839be ]

LUCENE-7300: Add HardLinkCopyDirectoryWrapper to speed up file copying if 
hardlinks are applicable


> Add directory wrapper that optionally uses hardlinks in copyFrom
> 
>
> Key: LUCENE-7300
> URL: https://issues.apache.org/jira/browse/LUCENE-7300
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 6.1
>Reporter: Simon Willnauer
> Fix For: 6.1
>
> Attachments: LUCENE-7300.patch, LUCENE-7300.patch
>
>
> Today we always do byte-by-byte copy in Directory#copyFrom. While this is 
> reliable and should be the default, certain situations can be improved by 
> using hardlinks if possible to get constant time copy on OS / FS that support 
> such an operation. Something like this could reside in misc if it's contained 
> enough since it requires LinkPermissions to be set and needs to detect if 
> both directories are subclasses of FSDirectory etc.
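
A usage sketch of the wrapper added by this commit, under the assumption that
HardLinkCopyDirectoryWrapper lives in org.apache.lucene.store and simply wraps
another Directory (the paths below are placeholders): copyFrom on the wrapper
should attempt a hard link and fall back to a normal byte-by-byte copy when the
OS/FS or permissions don't allow it:

{noformat}
import java.nio.file.Paths;

import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.HardLinkCopyDirectoryWrapper;
import org.apache.lucene.store.IOContext;

public class HardLinkCopyExample {
  public static void main(String[] args) throws Exception {
    try (Directory source = FSDirectory.open(Paths.get("/tmp/index-src"));
         Directory target = new HardLinkCopyDirectoryWrapper(
             FSDirectory.open(Paths.get("/tmp/index-dst")))) {
      // Copy every file; hard links are used where the filesystem supports them.
      for (String file : source.listAll()) {
        target.copyFrom(source, file, file, IOContext.DEFAULT);
      }
    }
  }
}
{noformat}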



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


