Re: [VOTE] Release Lucene/Solr 4.5.0 RC1

2013-09-23 Thread Adrien Grand
Yonik,

On Tue, Sep 24, 2013 at 1:52 AM, Yonik Seeley  wrote:
> The fix has been committed to the 45 branch.
> Given how much pain was caused the last time the binary format changed
> (the change from modified-UTF8 to normal UTF-8), I think this warrants
> a 4.5 re-spin.

Thanks for fixing this bug, Yonik. I will build a new RC ASAP...

-- 
Adrien




[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0-ea-b106) - Build # 3288 - Failure!

2013-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/3288/
Java: 64bit/jdk1.8.0-ea-b106 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.core.TestImplicitCoreProperties.testImplicitPropertiesAreSubstitutedInSolrConfig

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([A34D6E31F745DEF4:A9576417E6FFB61]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:637)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:604)
at 
org.apache.solr.core.TestImplicitCoreProperties.testImplicitPropertiesAreSubstitutedInSolrConfig(TestImplicitCoreProperties.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:491)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.la

[jira] [Created] (LUCENE-5240) additional safety in Tokenizer state machine

2013-09-23 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5240:
---

 Summary: additional safety in Tokenizer state machine
 Key: LUCENE-5240
 URL: https://issues.apache.org/jira/browse/LUCENE-5240
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5240.patch

{code}
   * NOTE: 
   * The default implementation closes the input Reader, so
   * be sure to call super.close() when overriding this method.
   */
  @Override
  public void close() throws IOException {
{code}

We can now easily add a simple check for this in setReader. I found a few bugs 
and fixed all of them except TrieTokenizer in Solr (I am lost here... somewhere I 
have a patch to remove this thing).
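For illustration, a minimal sketch of the pattern the javadoc NOTE describes and that the proposed setReader check would enforce; the subclass and its extra resource are hypothetical, not part of the patch:

{code}
// Hypothetical Tokenizer subclass: it must call super.close() so the base
// class can close the input Reader and reset its internal state; the proposed
// check in setReader would catch subclasses that forget to do this.
public final class ExampleTokenizer extends org.apache.lucene.analysis.Tokenizer {
  private java.io.Closeable sideResource; // hypothetical per-instance resource

  public ExampleTokenizer(java.io.Reader input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws java.io.IOException {
    return false; // real tokenization logic omitted in this sketch
  }

  @Override
  public void close() throws java.io.IOException {
    try {
      if (sideResource != null) {
        sideResource.close();
        sideResource = null;
      }
    } finally {
      super.close(); // required: closes the input Reader
    }
  }
}
{code}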





[jira] [Updated] (LUCENE-5240) additional safety in Tokenizer state machine

2013-09-23 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5240:


Attachment: LUCENE-5240.patch

> additional safety in Tokenizer state machine
> 
>
> Key: LUCENE-5240
> URL: https://issues.apache.org/jira/browse/LUCENE-5240
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
> Attachments: LUCENE-5240.patch
>
>
> {code}
>* NOTE: 
>* The default implementation closes the input Reader, so
>* be sure to call super.close() when overriding this method.
>*/
>   @Override
>   public void close() throws IOException {
> {code}
> We can now easily add a simple check for this in setReader. I found a few 
> bugs and fixed all of them except TrieTokenizer in Solr (I am lost here... 
> somewhere I have a patch to remove this thing).




[jira] [Commented] (SOLR-5243) killing a shard in one collection can result in leader election in a different collection

2013-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775833#comment-13775833
 ] 

Yonik Seeley commented on SOLR-5243:


Do the shard split tests start up more than 3 cores per CoreContainer?  If not, 
there should be no impact.  If so, then the change in timing may have uncovered 
a different issue.

> killing a shard in one collection can result in leader election in a 
> different collection
> -
>
> Key: SOLR-5243
> URL: https://issues.apache.org/jira/browse/SOLR-5243
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Yonik Seeley
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5243.patch, SOLR-5243.patch
>
>
> Discovered while doing some more ad-hoc testing... if I create two 
> collections with the same shard name and then kill the leader in one, it can 
> sometimes cause a leader election in the other (leaving the first leaderless).




[jira] [Resolved] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-5261.


Resolution: Fixed

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5261.patch
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4




Re: [VOTE] Release Lucene/Solr 4.5.0 RC1

2013-09-23 Thread Yonik Seeley
The fix has been committed to the 45 branch.
Given how much pain was caused the last time the binary format changed
(the change from modified-UTF8 to normal UTF-8), I think this warrants
a 4.5 re-spin.

-Yonik
http://lucidworks.com

On Mon, Sep 23, 2013 at 12:08 PM, Yonik Seeley  wrote:
> Folks... I discovered a serious back compat issue:
> https://issues.apache.org/jira/browse/SOLR-5261
>
> Looking into it now...
>
> -Yonik
> http://lucidworks.com
>
>
> On Thu, Sep 19, 2013 at 1:56 PM, Adrien Grand  wrote:
>> Here is a new release candidate. Difference with the previous
>> candidate is that this RC1 now has LUCENE-5223 as well as the missing
>> commit from SOLR-4221:
>> http://people.apache.org/~jpountz/staging_area/lucene-solr-4.5.0-RC1-rev1524755/
>>
>> This vote is open until Tuesday.
>>
>> Smoke tester was happy on my end so here is my +1.
>>
>> --
>> Adrien
>>



[jira] [Commented] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775812#comment-13775812
 ] 

ASF subversion and git services commented on SOLR-5261:
---

Commit 1525748 from [~yo...@apache.org] in branch 'dev/branches/lucene_solr_4_5'
[ https://svn.apache.org/r1525748 ]

SOLR-5261: fix javabin block indexing back compat

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5261.patch
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4




[jira] [Commented] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775807#comment-13775807
 ] 

ASF subversion and git services commented on SOLR-5261:
---

Commit 1525744 from [~yo...@apache.org] in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1525744 ]

SOLR-5261: fix javabin block indexing back compat

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5261.patch
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4




[jira] [Commented] (SOLR-5264) New method on NamedList to return one or many config arguments as collection

2013-09-23 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775798#comment-13775798
 ] 

Shawn Heisey commented on SOLR-5264:


Patch isn't complete. I will fix and re-upload later.

> New method on NamedList to return one or many config arguments as collection
> 
>
> Key: SOLR-5264
> URL: https://issues.apache.org/jira/browse/SOLR-5264
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.5
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5264.patch
>
>
> FieldMutatingUpdateProcessorFactory has a method called "oneOrMany" 
> that takes all of the entries in a NamedList and pulls them out into a 
> Collection.  I'd like to use that in a custom update processor I'm building.
> It seems as though this functionality would be right at home as part of 
> NamedList itself.  Here's a patch that moves the method.




[jira] [Commented] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775786#comment-13775786
 ] 

ASF subversion and git services commented on SOLR-5261:
---

Commit 1525732 from [~yo...@apache.org] in branch 'dev/trunk'
[ https://svn.apache.org/r1525732 ]

SOLR-5261: fix javabin block indexing back compat

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5261.patch
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4




[jira] [Updated] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-5261:
---

Attachment: SOLR-5261.patch

The issue was caused by the block indexing changes.  Here's a patch that's 
fully back compatible if you're not sending child docs.

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5261.patch
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4




[jira] [Updated] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley updated SOLR-5261:
---

 Priority: Blocker  (was: Major)
Fix Version/s: 5.0
   4.5
 Assignee: Yonik Seeley

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Blocker
> Fix For: 4.5, 5.0
>
>
> Seems back compat in the binary protocol was broken sometime after 4.4




Re: Replication and SolrCloud

2013-09-23 Thread Shawn Heisey

On 9/23/2013 2:06 PM, Erick Erickson wrote:

Let's say a configuration is running SolrCloud _and_ has master or slave bits defined in the replication
handler. Is it valid? Taken care of? Is it worth a JIRA to barf if we
detect that condition?

Because it strikes me as something that's at worst undefined behavior,
at best ignored and somewhere in the middle does replications as well
as peer synchs as well as distributed updates.

Under any circumstances it doesn't seem like the user is doing the right thing.


Initial thought: Yes, detect and explode.

Second thought: Allowing replication config for the expert user 
(possibly for backup purposes) might be useful.


Third thought: Yes, detect and explode.  If someone wanted to write an 
application that used the handler as a direct API rather than through 
solrconfig.xml configuration, that would work with no problem. 
SolrCloud basically requires that the /replication handler be enabled, 
but not configured.


Is the replication API fully documented anywhere?  It might be nice to 
provide a skeletal example java application that talks to the 
replication API for simple index backup purposes.  It would be 
particularly nice if it used CloudSolrServer (or the ZK client classes) 
and showed how to back up and restore multiple shards.  If I had any 
idea how to write such an application, I would have already gotten 
started on it.
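A bare-bones sketch of what such a skeletal client might look like: it just issues the replication handler's backup command over plain HTTP for a hard-coded list of core URLs. The core URLs are placeholders, and the ZooKeeper-based shard discovery plus any restore handling are left out.

// Minimal sketch: trigger an index backup on each core via the /replication
// handler's "backup" command. Core URLs below are placeholders; discovery via
// CloudSolrServer/ZooKeeper and error handling are intentionally omitted.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ReplicationBackupSketch {
  public static void main(String[] args) throws Exception {
    String[] coreUrls = {
        "http://localhost:8983/solr/collection1_shard1_replica1",
        "http://localhost:8983/solr/collection1_shard2_replica1"
    };
    for (String coreUrl : coreUrls) {
      URL url = new URL(coreUrl + "/replication?command=backup&name=nightly");
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      try (InputStream in = conn.getInputStream()) {
        // The command returns immediately; poll command=details to find out
        // when the snapshot has actually completed.
        System.out.println(coreUrl + " -> HTTP " + conn.getResponseCode());
      } finally {
        conn.disconnect();
      }
    }
  }
}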


Thanks,
Shawn





Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_40) - Build # 7593 - Failure!

2013-09-23 Thread Robert Muir
MockGraphTokenFilter threw an NPE here because its "random" was never set.

I committed a fix with a better error (and also made it reset its
"random" to null on close() to ensure it's pickier and not sneakily
consuming random bits and so on).
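Roughly, the pattern the fix follows looks like the snippet below (an illustration only, not the actual MockGraphTokenFilter code): the randomness source is created in reset() and nulled in close(), so any code that consumes random bits without a proper reset() fails fast instead of silently drawing from a stale seed.

class RandomBackedFilter {
  private final long seed;
  private java.util.Random random;   // non-null only between reset() and close()

  RandomBackedFilter(long seed) {
    this.seed = seed;
  }

  void reset() {
    random = new java.util.Random(seed);   // fresh, reproducible stream per reset
  }

  int nextChoice() {
    if (random == null) {
      // fail fast with a clear error instead of an unexplained NPE
      throw new IllegalStateException("reset() was not called");
    }
    return random.nextInt(3);
  }

  void close() {
    random = null;   // be picky: no sneaky random consumption after close()
  }
}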

On Mon, Sep 23, 2013 at 3:08 PM, Robert Muir  wrote:
> I am looking at it: the issue is in MockGraphTokenFilter...
>
> First I will make a commit so that when it gets an unexpected exception
> (like the NPE here), you get the full stack trace!
>
> On Mon, Sep 23, 2013 at 1:37 PM, Uwe Schindler  wrote:
>> Hey,
>> I will take care tomorrow. This is related to Robert's and my changes.
>> Uwe
>>
>>
>>
>> Policeman Jenkins Server  schrieb:
>>>
>>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/7593/
>>> Java: 32bit/jdk1.7.0_40 -client -XX:+UseParallelGC
>>>
>>> 1 tests failed.
>>> REGRESSION:
>>> org.apache.lucene.analysis.core.TestRandomChains.testRandomChains
>>>
>>> Error Message:
>>> got wrong exception when reset() not called:
>>> java.lang.NullPointerException
>>>
>>> Stack Trace:
>>> java.lang.AssertionError: got wrong exception when reset() not called:
>>> java.lang.NullPointerException
>>>  at
>>> __randomizedtesting.SeedInfo.seed([D41D4BF8A0CA5DD4:E9FC6299E7D84014]:0)
>>>  at org.junit.Assert.fail(Assert.java:93)
>>>  at
>>> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:395)
>>>  at
>>> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:476)
>>>  at
>>>
>>> org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:909)
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>  at
>>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>  at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>  at java.lang.reflect.Method.invoke(Method.java:606)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
>>>  at
>>>
>>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>>>  at
>>> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
>>>  at
>>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>>>  at
>>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>>>  at
>>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>>>  at
>>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>>>  at
>>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>>>  at
>>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>>  at
>>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
>>>  at
>>>
>>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
>>>  at
>>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
>>>  at
>>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
>>>  at
>>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>>>  at
>>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>>>  at
>>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>>>  at
>>>
>>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>>>  at
>>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>>>  at
>>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>>  at
>>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
>>>  at
>>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure

[jira] [Updated] (SOLR-5264) New method on NamedList to return one or many config arguments as collection

2013-09-23 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-5264:
---

Attachment: SOLR-5264.patch

Attaching patch.  I'm sure there's a lot to not like about it, so I'd like 
comments in two areas.  1) Is the general idea sound?  2) What specifically 
could be done better?

It did occur to me that the more generic method I mentioned in the TODO would 
in fact be the best approach right up front.  That would name the method 
removeCollection instead of removeArgsCollection, returning Collection. 
 It would then be up to the caller to decide what object types constitute an 
error condition.  Thoughts?
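Roughly, the kind of helper being discussed would behave like the sketch below. SimpleNamedList is only a stand-in for illustration, not Solr's NamedList or the attached patch, and the method name is hypothetical:

{code}
// Illustrative only: pull every value stored under a given name out of an
// ordered name/value list and return the values as a Collection, leaving any
// type policing to the caller.
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.Collection;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

class SimpleNamedList {
  private final List<Map.Entry<String, Object>> entries =
      new ArrayList<Map.Entry<String, Object>>();

  void add(String name, Object value) {
    entries.add(new SimpleEntry<String, Object>(name, value));
  }

  /** Removes every entry with the given name and returns its values. */
  Collection<Object> removeCollection(String name) {
    List<Object> values = new ArrayList<Object>();
    for (Iterator<Map.Entry<String, Object>> it = entries.iterator(); it.hasNext(); ) {
      Map.Entry<String, Object> e = it.next();
      if (e.getKey().equals(name)) {
        values.add(e.getValue());
        it.remove();
      }
    }
    return values;
  }
}
{code}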

> New method on NamedList to return one or many config arguments as collection
> 
>
> Key: SOLR-5264
> URL: https://issues.apache.org/jira/browse/SOLR-5264
> Project: Solr
>  Issue Type: Improvement
>  Components: clients - java
>Affects Versions: 4.5
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Fix For: 5.0, 4.6
>
> Attachments: SOLR-5264.patch
>
>
> FieldMutatingUpdateProcessorFactory has a method called "oneOrMany" 
> that takes all of the entries in a NamedList and pulls them out into a 
> Collection.  I'd like to use that in a custom update processor I'm building.
> It seems as though this functionality would be right at home as part of 
> NamedList itself.  Here's a patch that moves the method.




[jira] [Created] (SOLR-5264) New method on NamedList to return one or many config arguments as collection

2013-09-23 Thread Shawn Heisey (JIRA)
Shawn Heisey created SOLR-5264:
--

 Summary: New method on NamedList to return one or many config 
arguments as collection
 Key: SOLR-5264
 URL: https://issues.apache.org/jira/browse/SOLR-5264
 Project: Solr
  Issue Type: Improvement
  Components: clients - java
Affects Versions: 4.5
Reporter: Shawn Heisey
Assignee: Shawn Heisey
Priority: Minor
 Fix For: 5.0, 4.6


FieldMutatingUpdateProcessorFactory has a method called "oneOrMany" that takes 
all of the entries in a NamedList and pulls them out into a Collection.  
I'd like to use that in a custom update processor I'm building.

It seems as though this functionality would be right at home as part of 
NamedList itself.  Here's a patch that moves the method.




Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_40) - Build # 7593 - Failure!

2013-09-23 Thread Robert Muir
I am looking at it: the issue is in MockGraphTokenFilter...

First I will make a commit so that when it gets an unexpected exception
(like the NPE here), you get the full stack trace!

On Mon, Sep 23, 2013 at 1:37 PM, Uwe Schindler  wrote:
> Hey,
> I will take care tomorrow. This is related to Robert's and my changes.
> Uwe
>
>
>
> Policeman Jenkins Server  schrieb:
>>
>> Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/7593/
>> Java: 32bit/jdk1.7.0_40 -client -XX:+UseParallelGC
>>
>> 1 tests failed.
>> REGRESSION:
>> org.apache.lucene.analysis.core.TestRandomChains.testRandomChains
>>
>> Error Message:
>> got wrong exception when reset() not called:
>> java.lang.NullPointerException
>>
>> Stack Trace:
>> java.lang.AssertionError: got wrong exception when reset() not called:
>> java.lang.NullPointerException
>>  at
>> __randomizedtesting.SeedInfo.seed([D41D4BF8A0CA5DD4:E9FC6299E7D84014]:0)
>>  at org.junit.Assert.fail(Assert.java:93)
>>  at
>> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:395)
>>  at
>> org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:476)
>>  at
>>
>> org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:909)
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>  at
>> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>  at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>  at java.lang.reflect.Method.invoke(Method.java:606)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
>>  at
>>
>> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>>  at
>> org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
>>  at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>>  at
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>>  at
>> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>>  at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>>  at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>>  at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
>>  at
>>
>> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
>>  at
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
>>  at
>> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
>>  at
>> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>>  at
>> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>>  at
>> com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>>  at
>>
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>>  at
>> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>>  at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at
>> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
>>  at
>> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>>  at
>> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>>  at
>> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>>  at
>> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>>  at
>>
>> com.carrotsearch.randomizedtesting.ThreadLeakControl$Sta

[jira] [Commented] (SOLR-5262) implicit solr.core.* properties should always be available, regardless of whether underlying core.property is specified

2013-09-23 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13775704#comment-13775704
 ] 

Hoss Man commented on SOLR-5262:


Once resolved, this ref guide page needs to be updated; note the comment:

https://cwiki.apache.org/confluence/display/solr/Configuring+solrconfig.xml?focusedCommentId=34023973&#comment-34023973

> implicit solr.core.* properties should always be available, regardless of 
> whether underlying core.property is specified
> --
>
> Key: SOLR-5262
> URL: https://issues.apache.org/jira/browse/SOLR-5262
> Project: Solr
>  Issue Type: Bug
>Reporter: Hoss Man
>
> reviewing the docs for core.properties and implicit property substitution, i 
> noticed what seems to be a bug in how implicit properties are made available 
> for config files.
> if you look at CoreDescriptor.buildSubstitutableProperties, the logic only 
> loops over the names found in "coreProperties" -- meaning that if a user 
> doesn't explicitly set one of the "standard" properties there, then the 
> corresponding "solr.core.propname" implicit value (with the default value) 
> will not be available.
> the point of the implicit properties is that they should *always* be 
> available for use in configs, even if the value comes from the hardcoded 
> default, or is derived from something else.
> (ie: if you put this in the example solrconfig.xml...
> {noformat}
>  
>all
>10
>text
>${solr.core.ulogDir}
>  
> {noformat}
> ...Solr will fail to start up, unless you also add an explicit "ulogDir=tlog" 
> to the core.properties file -- but this should work w/o the user explicitly 
> configuring the ulogDir property.)




[jira] [Created] (SOLR-5263) CloudSolrServer URL cache update race

2013-09-23 Thread Jessica Cheng (JIRA)
Jessica Cheng created SOLR-5263:
---

 Summary: CloudSolrServer URL cache update race
 Key: SOLR-5263
 URL: https://issues.apache.org/jira/browse/SOLR-5263
 Project: Solr
  Issue Type: Bug
  Components: clients - java, SolrCloud
Affects Versions: 4.4
Reporter: Jessica Cheng


In CloudSolrServer.request, urlLists (and the like) is updated if 
lastClusterStateHashCode is different from the current hash code of 
clusterState. However, each time this happens, only the cache entry for the 
collection currently being requested is updated. The following sequence 
causes a race:

1. Query collection A, so a cache entry exists for A.
2. Update collection A (the cluster state changes).
3. Query collection B; the request method notices the hash code changed, 
   updates the cache for collection B, and updates lastClusterStateHashCode.
4. Query collection A; since lastClusterStateHashCode has already been 
   updated, the stale cache entry for collection A is never refreshed.

Can fix one of two ways:
1. Track lastClusterStateHashCode per collection and lazily update each entry 
   (sketched below).
2. Every time we notice lastClusterStateHashCode != clusterState.hashCode(), 
   either
   2a. rebuild the entire cache for all collections, or
   2b. clear all currently cached collections.
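A rough sketch of option 1 follows; the class and field names are illustrative only, not the actual CloudSolrServer internals:

{code}
// Illustration of fix option 1: store the cluster-state hash per cached
// collection instead of in one shared lastClusterStateHashCode field, so a
// stale entry for collection A is rebuilt the next time A is queried.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class PerCollectionUrlCache {

  interface UrlBuilder {
    List<String> build(String collection);
  }

  private static final class Entry {
    final int clusterStateHash;  // state version this entry was built from
    final List<String> urls;     // resolved URLs for the collection
    Entry(int clusterStateHash, List<String> urls) {
      this.clusterStateHash = clusterStateHash;
      this.urls = urls;
    }
  }

  private final Map<String, Entry> cache = new ConcurrentHashMap<String, Entry>();

  List<String> urlsFor(String collection, int currentClusterStateHash, UrlBuilder builder) {
    Entry e = cache.get(collection);
    if (e == null || e.clusterStateHash != currentClusterStateHash) {
      // rebuild lazily, and only for the collection being requested
      e = new Entry(currentClusterStateHash, builder.build(collection));
      cache.put(collection, e);
    }
    return e.urls;
  }
}
{code}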




[jira] [Created] (SOLR-5262) implicit solr.core.* properties should always be available, regardless of whether underlying core.property is specified

2013-09-23 Thread Hoss Man (JIRA)
Hoss Man created SOLR-5262:
--

 Summary: implicit solr.core.* properties should always be 
available, regardless of whether underlying core.property is specified
 Key: SOLR-5262
 URL: https://issues.apache.org/jira/browse/SOLR-5262
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man


reviewing the docs for core.properties and implicit property substitution, i 
noticed what seems to be a bug in how implicit properties are made available 
for config files.

if you look at CoreDescriptor.buildSubstitutableProperties, the logic only 
loops over the names found in "coreProperties" -- meaning that if a user 
doesn't explicitly set one of the "standard" properties there, then the 
corresponding "solr.core.propname" implicit value (with the default value) will 
not be available.

the point of the implicit properties is that they should *always* be available 
for use in configs, even if the value comes from the hardcoded default, or is 
derived from something else.

(ie: if you put this in the example solrconfig.xml...

{noformat}
 
   all
   10
   text
   ${solr.core.ulogDir}
 
{noformat}

...Solr will fail to start up, unless you also add an explicit "ulogDir=tlog" 
to the core.properties file -- but this should work w/o the user explicitly 
configuring the ulogDir property.)
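A simplified illustration of the difference (not the actual CoreDescriptor code; the property names and defaults here are only examples):

{code}
// Illustrative only: building the implicit "solr.core.*" substitution
// properties from the user-supplied entries alone drops every standard
// property left at its default; iterating over the standard names and
// falling back to defaults keeps all implicit properties available.
import java.util.HashMap;
import java.util.Map;

class ImplicitCorePropsSketch {
  // example standard property names and hardcoded defaults (illustrative)
  private static final Map<String, String> DEFAULTS = new HashMap<String, String>();
  static {
    DEFAULTS.put("name", "collection1");
    DEFAULTS.put("dataDir", "data");
    DEFAULTS.put("ulogDir", "data/tlog");
  }

  /** Buggy variant: only properties the user set become solr.core.* values. */
  static Map<String, String> fromUserPropsOnly(Map<String, String> userProps) {
    Map<String, String> out = new HashMap<String, String>();
    for (Map.Entry<String, String> e : userProps.entrySet()) {
      out.put("solr.core." + e.getKey(), e.getValue());
    }
    return out;
  }

  /** Fixed variant: every standard property is present, defaulted when unset. */
  static Map<String, String> withDefaults(Map<String, String> userProps) {
    Map<String, String> out = new HashMap<String, String>();
    for (Map.Entry<String, String> d : DEFAULTS.entrySet()) {
      String value = userProps.containsKey(d.getKey())
          ? userProps.get(d.getKey()) : d.getValue();
      out.put("solr.core." + d.getKey(), value);
    }
    return out;
  }
}
{code}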




Re: [JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_40) - Build # 7593 - Failure!

2013-09-23 Thread Uwe Schindler
Hey, 
I will take care tomorrow.  This is related to Robert's and my changes.
Uwe



Policeman Jenkins Server  schrieb:
>Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/7593/
>Java: 32bit/jdk1.7.0_40 -client -XX:+UseParallelGC
>
>1 tests failed.
>REGRESSION: 
>org.apache.lucene.analysis.core.TestRandomChains.testRandomChains
>
>Error Message:
>got wrong exception when reset() not called:
>java.lang.NullPointerException
>
>Stack Trace:
>java.lang.AssertionError: got wrong exception when reset() not called:
>java.lang.NullPointerException
>   at
>__randomizedtesting.SeedInfo.seed([D41D4BF8A0CA5DD4:E9FC6299E7D84014]:0)
>   at org.junit.Assert.fail(Assert.java:93)
>   at
>org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:395)
>   at
>org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:476)
>   at
>org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:909)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at
>sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at
>sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
>   at
>org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
>   at
>org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
>   at
>org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at
>com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at
>org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
>   at
>org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>   at
>org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>   at
>com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at
>com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
>   at
>com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
>   at
>com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
>   at
>com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
>   at
>org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
>   at
>org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
>   at
>com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
>   at
>com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>   at
>com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
>   at
>com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at
>org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
>   at
>org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
>   at
>org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
>   at
>org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
>   at
>com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
>   at
>com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
>   at java.lang.Thread.run(Thread.java:724)
>
>
>
>
>Build Log:
>[...truncated 5083 lines...]
>   [junit4] Suite: org.apache.lucene.analysis.core.Tes

[jira] [Created] (LUCENE-5239) Scary TestSearcherManager failure

2013-09-23 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-5239:
--

 Summary: Scary TestSearcherManager failure
 Key: LUCENE-5239
 URL: https://issues.apache.org/jira/browse/LUCENE-5239
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless


http://builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/58093 hit 
a spooky failure, where it looks like the wrong document is deleted.

It doesn't reproduce easily, but after beasting I was finally able to reproduce 
it.  But when I run with -verbose it won't fail for me ... but does for Shai!

Details:

{noformat}
   [junit4] Suite: org.apache.lucene.search.TestSearcherManager
   [junit4]   1> doc id=0 is not supposed to be deleted, but got hitCount=0
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestSearcherManager 
-Dtests.method=testSearcherManager -Dtests.seed=6A8BC03A6E804E02 
-Dtests.slow=true -Dtests.locale=fr_LU -Dtests.timezone=Africa/Algiers 
-Dtests.file.encoding=UTF-8
   [junit4] FAILURE 4.98s J2 | TestSearcherManager.testSearcherManager <<<
   [junit4]> Throwable #1: java.lang.AssertionError
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([6A8BC03A6E804E02:66A48880D8892FFF]:0)
   [junit4]>at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runTest(ThreadedIndexingAndSearchingTestCase.java:607)
   [junit4]>at 
org.apache.lucene.search.TestSearcherManager.testSearcherManager(TestSearcherManager.java:56)
   [junit4]>at java.lang.Thread.run(Thread.java:722)
   [junit4]   2> Sep 23, 2013 2:10:14 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> WARNING: Will linger awaiting termination of 8 leaked 
thread(s).
   [junit4]   2> Sep 23, 2013 2:10:34 PM 
com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
   [junit4]   2> SEVERE: 8 threads leaked from SUITE scope at 
org.apache.lucene.search.TestSearcherManager: 
   [junit4]   2>1) Thread[id=219, name=TestSearcherManager-1-thread-6, 
state=TIMED_WAITING, group=TGRP-TestSearcherManager]
   [junit4]   2> at sun.misc.Unsafe.park(Native Method)
   [junit4]   2> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   [junit4]   2> at java.lang.Thread.run(Thread.java:722)
   [junit4]   2>2) Thread[id=220, name=TestSearcherManager-1-thread-7, 
state=TIMED_WAITING, group=TGRP-TestSearcherManager]
   [junit4]   2> at sun.misc.Unsafe.park(Native Method)
   [junit4]   2> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
   [junit4]   2> at java.lang.Thread.run(Thread.java:722)
   [junit4]   2>3) Thread[id=217, name=TestSearcherManager-1-thread-4, 
state=TIMED_WAITING, group=TGRP-TestSearcherManager]
   [junit4]   2> at sun.misc.Unsafe.park(Native Method)
   [junit4]   2> at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:226)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:359)
   [junit4]   2> at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:942)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1043)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1103)
   [junit4]   2> at 
java.util.concurrent.ThreadPoolE

Replication and SolrCloud

2013-09-23 Thread Erick Erickson
Let's say a configuration is running SolrCloud _and_ has master or slave bits defined in the replication
handler. Is it valid? Taken care of? Is it worth a JIRA to barf if we
detect that condition?

Because it strikes me as something that's at worst undefined behavior,
at best ignored and somewhere in the middle does replications as well
as peer synchs as well as distributed updates.

Under any circumstances it doesn't seem like the user is doing the right thing.

Erick




[jira] [Commented] (SOLR-5243) killing a shard in one collection can result in leader election in a different collection

2013-09-23 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774752#comment-13774752
 ] 

Shalin Shekhar Mangar commented on SOLR-5243:
-

I think this fix either caused a bug or uncovered a bug in shard splitting. 
ShardSplitTest has been failing sporadically since this was committed.

Mark/Yonik, just off the top of your head, any idea why that would happen?

http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-MacOSX/828/
http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/7589/

> killing a shard in one collection can result in leader election in a 
> different collection
> -
>
> Key: SOLR-5243
> URL: https://issues.apache.org/jira/browse/SOLR-5243
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Yonik Seeley
>Assignee: Mark Miller
>Priority: Blocker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5243.patch, SOLR-5243.patch
>
>
> Discovered while doing some more ad-hoc testing... if I create two 
> collections with the same shard name and then kill the leader in one, it can 
> sometimes cause a leader election in the other (leaving the first leaderless).




[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.7.0_40) - Build # 7593 - Failure!

2013-09-23 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/7593/
Java: 32bit/jdk1.7.0_40 -client -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.core.TestRandomChains.testRandomChains

Error Message:
got wrong exception when reset() not called: java.lang.NullPointerException

Stack Trace:
java.lang.AssertionError: got wrong exception when reset() not called: 
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([D41D4BF8A0CA5DD4:E9FC6299E7D84014]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkResetException(BaseTokenStreamTestCase.java:395)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:476)
at 
org.apache.lucene.analysis.core.TestRandomChains.testRandomChains(TestRandomChains.java:909)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1559)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$600(RandomizedRunner.java:79)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:773)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:787)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.TestRuleFieldCacheSanity$1.evaluate(TestRuleFieldCacheSanity.java:51)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:782)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:442)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:746)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:648)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:682)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:693)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:70)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:358)
at java.lang.Thread.run(Thread.java:724)




Build Log:
[...truncated 5083 lines...]
   [junit4] Suite: org.apache.lucene.analysis.core.TestRandomChains
   [junit4]   2> Exception from random analyzer: 
   [junit4]   2> charfilters=
   [junit4]   2> tokenizer=
   [junit4]   2>  

[jira] [Comment Edited] (LUCENE-5109) EliasFano value index

2013-09-23 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774514#comment-13774514
 ] 

Paul Elschot edited comment on LUCENE-5109 at 9/23/13 5:39 PM:
---

Patch of 23 September, as announced yesterday.

I tried benchmarking with index divisor 128 instead of 256. It is indeed a 
little bit faster for far advanceTo operations.

I used this code snippet in the benchmark to avoid the EliasFanoDocIdSet being 
used when it is not advisable:

{code}
new DocIdSetFactory() {
  @Override
  public DocIdSet copyOf(FixedBitSet set) throws IOException {
    int numValues = set.cardinality();
    int upperBound = set.prevSetBit(set.length() - 1);
    // prefer the Elias-Fano encoding only when it is expected to be smaller
    if (EliasFanoDocIdSet.sufficientlySmallerThanBitSet(numValues, upperBound)) {
      final EliasFanoDocIdSet copy = new EliasFanoDocIdSet(numValues, upperBound);
      copy.encodeFromDisi(set.iterator());
      return copy;
    } else {
      return set;
    }
  }
}
{code}

The sufficientlySmallerThanBitSet method currently checks for upperBound/7 > 
numValues.
That used to be a division by 6; I added 1 to the divisor because the index was added.

Anyway, "advisable" will depend on better benchmarking than I can do...

  was (Author: paul.elsc...@xs4all.nl):
Patch of 23 september: as announced yesterday.

I tried benchmarking with index divisor 128 instead of 256. It is indeed a 
little bit faster for far advanceTo operations.

I used this code snippet in the benchmark to avoid the EliasFanoDocIdSet being 
used when it is not advisable:

{code}
new DocIdSetFactory() {
  @Override
  public DocIdSet copyOf(FixedBitSet set) throws IOException {
long numValues = set.cardinality();
long upperBound = set.prevSetBit(set.length() - 1);
if (EliasFanoDocIdSet.sufficientlySmallerThanBitSet(numValues, 
upperBound)) {
  final EliasFanoDocIdSet copy = new EliasFanoDocIdSet(numValues, 
upperBound));
  copy.encodeFromDisi(set.iterator());
  return copy;
} else {
  return set;
}
  }
}
{code}

The sufficientlySmallerThanBitSet method currently checks for upperbound/7 > 
numValues.
That used to be a division by 6, I added 1 because the index was added.

Anyway, "advisable" will depend on better benchmarking than I can do...
  
> EliasFano value index
> -
>
> Key: LUCENE-5109
> URL: https://issues.apache.org/jira/browse/LUCENE-5109
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Paul Elschot
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-5109.patch, LUCENE-5109.patch, LUCENE-5109.patch
>
>
> Index upper bits of Elias-Fano sequence.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5183) Add block support for JSONLoader

2013-09-23 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774667#comment-13774667
 ] 

Varun Thacker commented on SOLR-5183:
-

Can we finalize the format? Personally, I am okay with [~mkhludnev]'s suggestion.

> Add block support for JSONLoader
> 
>
> Key: SOLR-5183
> URL: https://issues.apache.org/jira/browse/SOLR-5183
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Varun Thacker
> Fix For: 4.5, 5.0
>
> Attachments: SOLR-5183.patch
>
>
> We should be able to index block documents in JSON format

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 4.5.0 RC1

2013-09-23 Thread Simon Willnauer
SUCCESS! [1:04:30.537779]

+1 to release! thanks adrien!

On Sat, Sep 21, 2013 at 3:32 AM, Chris Hostetter
 wrote:
>
> : : > 
> http://people.apache.org/~jpountz/staging_area/lucene-solr-4.5.0-RC1-rev1524755/
>
> Once I applied the javadoc linter workaround to the 4_5 branch, I
> found no other problems with RC1 other than LUCENE-5233 -- and I certainly
> don't think LUCENE-5233 is significant enough to warrant a re-spin.
>
>
> So I vote +1 based on the following SHA1 files...
>
> 407d517272961cc09b5b2a6dc7f414c033c2a842 *lucene-4.5.0-src.tgz
> cb55b9fb36296e233d10b4dd0061af32947f1056 *lucene-4.5.0.tgz
> 82ed448175508792be960d31de05ea7e2815791e *lucene-4.5.0.zip
> 6db41833bf6763ec3b704cb343f59b779c16a841 *solr-4.5.0-src.tgz
> e9150dd7c1f6046f5879196ea266505613f26506 *solr-4.5.0.tgz
> 0c7d4bcb5c29f67f2722b1255a5da803772c03a5 *solr-4.5.0.zip
>
>
>
> -Hoss
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread Yonik Seeley (JIRA)
Yonik Seeley created SOLR-5261:
--

 Summary: can't update current trunk or 4x with 4.4 or earlier 
binary protocol
 Key: SOLR-5261
 URL: https://issues.apache.org/jira/browse/SOLR-5261
 Project: Solr
  Issue Type: Bug
Reporter: Yonik Seeley


Seems back compat in the binary protocol was broken sometime after 4.4

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [VOTE] Release Lucene/Solr 4.5.0 RC1

2013-09-23 Thread Yonik Seeley
Folks... I discovered a serious back compat issue:
https://issues.apache.org/jira/browse/SOLR-5261

Looking into it now...

-Yonik
http://lucidworks.com


On Thu, Sep 19, 2013 at 1:56 PM, Adrien Grand  wrote:
> Here is a new release candidate. Difference with the previous
> candidate is that this RC1 now has LUCENE-5223 as well as the missing
> commit from SOLR-4221:
> http://people.apache.org/~jpountz/staging_area/lucene-solr-4.5.0-RC1-rev1524755/
>
> This vote is open until Tuesday.
>
> Smoke tester was happy on my end so here is my +1.
>
> --
> Adrien
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5261) can't update current trunk or 4x with 4.4 or earlier binary protocol

2013-09-23 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774673#comment-13774673
 ] 

Yonik Seeley commented on SOLR-5261:


Using a 4.4 SolrJ client with binary requests against the current 4.5 branch 
gives the following exception on the server and a 500 error on the client:
{code}
 ERROR org.apache.solr.core.SolrCore  – java.lang.ClassCastException: 
java.util.ArrayList cannot be cast to java.lang.Float
at 
org.apache.solr.common.util.JavaBinCodec.readSolrInputDocument(JavaBinCodec.java:364)
{code}

> can't update current trunk or 4x with 4.4 or earlier binary protocol
> 
>
> Key: SOLR-5261
> URL: https://issues.apache.org/jira/browse/SOLR-5261
> Project: Solr
>  Issue Type: Bug
>Reporter: Yonik Seeley
>
> Seems back compat in the binary protocol was broken sometime after 4.4

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5259) Typo in error message from missing / wrong _version_ field

2013-09-23 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774627#comment-13774627
 ] 

Shalin Shekhar Mangar commented on SOLR-5259:
-

Committed r1525620 to trunk, r1525621 to branch_4x and r1525622 to 
lucene_solr_4_5.

> Typo in error message from missing / wrong _version_ field
> --
>
> Key: SOLR-5259
> URL: https://issues.apache.org/jira/browse/SOLR-5259
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Benson Margulies
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.5, 5.0
>
>
> Note the missing space between _version_ and field.
> Caused by: org.apache.solr.common.SolrException: Unable to use updateLog: 
> _version_field must exist in schema, using indexed="true" stored="true" and 
> multiValued="false" (_version_ is not indexed

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5249) ClassNotFoundException due to white-spaces in solrconfig.xml

2013-09-23 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774626#comment-13774626
 ] 

Uwe Schindler commented on SOLR-5249:
-

Hi Simon,
As I said before: if you want to trim() the class names, do it at the config 
parser level and not in SolrResourceLoader. Feel free to submit a patch that 
makes the solrconfig/solrschema parsing trim() class names!
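
A rough sketch of what that could look like (illustrative only; the XML lookup 
below is a made-up stand-in, not Solr's actual config parser, and it assumes a 
solrconfig.xml with at least one str element as in the example above):

{code}
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class TrimClassNameSketch {
  public static void main(String[] args) throws Exception {
    // Parse the config and read a class name the way a config parser might.
    Document doc = DocumentBuilderFactory.newInstance()
        .newDocumentBuilder().parse("solrconfig.xml");
    String raw = doc.getElementsByTagName("str").item(0).getTextContent();
    // Trimming here keeps trailing newlines/indentation from auto-formatting
    // from ever reaching SolrResourceLoader.findClass.
    String className = raw == null ? null : raw.trim();
    System.out.println("Loading class '" + className + "'");
  }
}
{code}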

> ClassNotFoundException due to white-spaces in solrconfig.xml
> 
>
> Key: SOLR-5249
> URL: https://issues.apache.org/jira/browse/SOLR-5249
> Project: Solr
>  Issue Type: Bug
>Reporter: Simon Endele
>Priority: Minor
> Attachments: SolrResourceLoader.java.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Due to auto-formatting by an text editor/IDE there may be line-breaks after 
> class names in the solrconfig.xml, for example:
> {code:xml}
>   
>   suggest
>name="classname">org.apache.solr.spelling.suggest.Suggester
>name="lookupImpl">org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   
>   [...]
>   
> {code}
> This will raise an exception in SolrResourceLoader as the white-spaces are 
> not stripped from the class name:
> {code}Caused by: org.apache.solr.common.SolrException: Error loading class 
> 'org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   '
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:449)
>   at 
> org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:471)
>   at 
> org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:467)
>   at org.apache.solr.spelling.suggest.Suggester.init(Suggester.java:102)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.inform(SpellCheckComponent.java:623)
>   at 
> org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:601)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:830)
>   ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>   at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:433)
>   ... 19 more{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-5259) Typo in error message from missing / wrong _version_ field

2013-09-23 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-5259.
-

   Resolution: Fixed
Fix Version/s: 5.0
   4.5
 Assignee: Shalin Shekhar Mangar

> Typo in error message from missing / wrong _version_ field
> --
>
> Key: SOLR-5259
> URL: https://issues.apache.org/jira/browse/SOLR-5259
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Benson Margulies
>Assignee: Shalin Shekhar Mangar
> Fix For: 4.5, 5.0
>
>
> Note the missing space between _version_ and field.
> Caused by: org.apache.solr.common.SolrException: Unable to use updateLog: 
> _version_field must exist in schema, using indexed="true" stored="true" and 
> multiValued="false" (_version_ is not indexed

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5259) Typo in error message from missing / wrong _version_ field

2013-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774625#comment-13774625
 ] 

ASF subversion and git services commented on SOLR-5259:
---

Commit 1525622 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_5'
[ https://svn.apache.org/r1525622 ]

SOLR-5259: Fix typo in error message when _version_ field is missing

> Typo in error message from missing / wrong _version_ field
> --
>
> Key: SOLR-5259
> URL: https://issues.apache.org/jira/browse/SOLR-5259
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Benson Margulies
>
> Note the missing space between _version_ and field.
> Caused by: org.apache.solr.common.SolrException: Unable to use updateLog: 
> _version_field must exist in schema, using indexed="true" stored="true" and 
> multiValued="false" (_version_ is not indexed

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-5249) ClassNotFoundException due to white-spaces in solrconfig.xml

2013-09-23 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-5249:


Comment: was deleted

(was: Commit 1525621 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1525621 ]

SOLR-5249: Fix typo in error message when _version_ field is missing)

> ClassNotFoundException due to white-spaces in solrconfig.xml
> 
>
> Key: SOLR-5249
> URL: https://issues.apache.org/jira/browse/SOLR-5249
> Project: Solr
>  Issue Type: Bug
>Reporter: Simon Endele
>Priority: Minor
> Attachments: SolrResourceLoader.java.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Due to auto-formatting by an text editor/IDE there may be line-breaks after 
> class names in the solrconfig.xml, for example:
> {code:xml}
>   
>   suggest
>name="classname">org.apache.solr.spelling.suggest.Suggester
>name="lookupImpl">org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   
>   [...]
>   
> {code}
> This will raise an exception in SolrResourceLoader as the white-spaces are 
> not stripped from the class name:
> {code}Caused by: org.apache.solr.common.SolrException: Error loading class 
> 'org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   '
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:449)
>   at 
> org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:471)
>   at 
> org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:467)
>   at org.apache.solr.spelling.suggest.Suggester.init(Suggester.java:102)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.inform(SpellCheckComponent.java:623)
>   at 
> org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:601)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:830)
>   ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>   at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:433)
>   ... 19 more{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5259) Typo in error message from missing / wrong _version_ field

2013-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774617#comment-13774617
 ] 

ASF subversion and git services commented on SOLR-5259:
---

Commit 1525620 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1525620 ]

SOLR-5259: Fix typo in error message when _version_ field is missing

> Typo in error message from missing / wrong _version_ field
> --
>
> Key: SOLR-5259
> URL: https://issues.apache.org/jira/browse/SOLR-5259
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 4.4
>Reporter: Benson Margulies
>
> Note the missing space between _version_ and field.
> Caused by: org.apache.solr.common.SolrException: Unable to use updateLog: 
> _version_field must exist in schema, using indexed="true" stored="true" and 
> multiValued="false" (_version_ is not indexed

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5249) ClassNotFoundException due to white-spaces in solrconfig.xml

2013-09-23 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774618#comment-13774618
 ] 

ASF subversion and git services commented on SOLR-5249:
---

Commit 1525621 from sha...@apache.org in branch 'dev/branches/branch_4x'
[ https://svn.apache.org/r1525621 ]

SOLR-5249: Fix typo in error message when _version_ field is missing

> ClassNotFoundException due to white-spaces in solrconfig.xml
> 
>
> Key: SOLR-5249
> URL: https://issues.apache.org/jira/browse/SOLR-5249
> Project: Solr
>  Issue Type: Bug
>Reporter: Simon Endele
>Priority: Minor
> Attachments: SolrResourceLoader.java.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Due to auto-formatting by an text editor/IDE there may be line-breaks after 
> class names in the solrconfig.xml, for example:
> {code:xml}
>   
>   suggest
>name="classname">org.apache.solr.spelling.suggest.Suggester
>name="lookupImpl">org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   
>   [...]
>   
> {code}
> This will raise an exception in SolrResourceLoader as the white-spaces are 
> not stripped from the class name:
> {code}Caused by: org.apache.solr.common.SolrException: Error loading class 
> 'org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   '
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:449)
>   at 
> org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:471)
>   at 
> org.apache.solr.core.SolrResourceLoader.newInstance(SolrResourceLoader.java:467)
>   at org.apache.solr.spelling.suggest.Suggester.init(Suggester.java:102)
>   at 
> org.apache.solr.handler.component.SpellCheckComponent.inform(SpellCheckComponent.java:623)
>   at 
> org.apache.solr.core.SolrResourceLoader.inform(SolrResourceLoader.java:601)
>   at org.apache.solr.core.SolrCore.(SolrCore.java:830)
>   ... 13 more
> Caused by: java.lang.ClassNotFoundException: 
> org.apache.solr.spelling.suggest.fst.WFSTLookupFactory
>   
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
>   at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:423)
>   at java.net.FactoryURLClassLoader.loadClass(URLClassLoader.java:789)
>   at java.lang.ClassLoader.loadClass(ClassLoader.java:356)
>   at java.lang.Class.forName0(Native Method)
>   at java.lang.Class.forName(Class.java:264)
>   at 
> org.apache.solr.core.SolrResourceLoader.findClass(SolrResourceLoader.java:433)
>   ... 19 more{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5260) Facet search on a docvalue field in a multi shard collection

2013-09-23 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774602#comment-13774602
 ] 

Trym Møller commented on SOLR-5260:
---

This only seems to happen when faceting on a numeric field. Why 
SimpleFacets.java forces FCS in that case is not clear:
{code}
if (ft.getNumericType() != null && sf.hasDocValues()) {
  // only fcs is able to leverage the numeric field caches
  method = FacetMethod.FCS;
}
{code}
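
For reference, the reporter's formula matches the shard request logged in the 
description: with the default facet.limit of 100, 10 + 1.5 * 100 = 160, which 
is the f.fieldA.facet.limit sent to each shard. A minimal sketch of that 
arithmetic (variable names are illustrative, not Solr code):

{code}
public class FacetOverRequestSketch {
  public static void main(String[] args) {
    int facetLimit = 100;                               // default facet.limit
    int perShardLimit = 10 + (int) (facetLimit * 1.5);  // 160, as in f.fieldA.facet.limit=160
    int shards = 2;
    long matchingRows = 1;                              // a single indexed document
    // "large" in the sense of the issue description:
    boolean large = (long) perShardLimit * shards > matchingRows;
    System.out.println(perShardLimit + " per shard, large=" + large);
  }
}
{code}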

> Facet search on a docvalue field in a multi shard collection
> 
>
> Key: SOLR-5260
> URL: https://issues.apache.org/jira/browse/SOLR-5260
> Project: Solr
>  Issue Type: Bug
>  Components: search, SolrCloud
>Affects Versions: 4.4
>Reporter: Trym Møller
>
> I have a problem doing facet search on a doc value field in a multi shard 
> collection.
> My Solr schema specifies fieldA as a docvalue type and I have created a two 
> shard collection using Solr 4.4.0 (and the unreleased 4.5 branch).
> When I do a facet search on fieldA with a "large" facet.limit, the query 
> fails with the exception below.
> A "large" facet.limit seems to be one where (10 + (facet.limit * 1.5)) * number of 
> shards > rows matching my query.
> The exception does not occur when I run with a single shard collection.
> It can easily be reproduced by indexing a single row and querying it, as the 
> default facet.limit is 100.
> The facet query received by Solr looks as follows:
> {noformat}
> 576793 [qtp170860084-18] INFO  org.apache.solr.core.SolrCore  ¦ 
> [trym_shard2_replica1] webapp=/solr path=/select 
>  
> params={facet=true&start=0&q=*:*&distrib=true&collection=trym&facet.field=fieldA&wt=javabin&version=2&rows=0}
>  
>  status=500 QTime=20
> {noformat}
> One of the "internal query" send by Solr to its shard looks like
> {noformat}
> 576783 [qtp170860084-19] INFO  org.apache.solr.core.SolrCore  ¦ 
> [trym_shard1_replica1] webapp=/solr path=/select 
>  
> params={facet=true&distrib=false&collection=trym&wt=javabin&version=2&rows=0&NOW=1379855011787
> 
>
> &shard.url=192.168.56.1:8501/solr/trym_shard1_replica1/&df=text&fl=id,score&f.fieldA.facet.limit=160
>&start=0&q=*:*&facet.field=fieldA&isShard=true&fsv=true} 
>  hits=1 status=500 QTime=2
> {noformat}
> The exception thrown by Solr is as follows
> {noformat}
> 576784 [qtp170860084-17] ERROR org.apache.solr.servlet.SolrDispatchFilter  ¦ 
> null:java.lang.IllegalStateException: 
>  Cannot use facet.mincount=0 on a field which is not indexed
> at 
> org.apache.solr.request.NumericFacets.getCounts(NumericFacets.java:257)
> at 
> org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:423)
> at 
> org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:530)
> at 
> org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:259)
> at 
> org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.Ha

[jira] [Updated] (SOLR-5260) Facet search on a docvalue field in a multi shard collection

2013-09-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trym Møller updated SOLR-5260:
--

Description: 
I have a problem doing facet search on a doc value field in a multi shard 
collection.

My Solr schema specifies fieldA as a docvalue type and I have created a two 
shard collection using Solr 4.4.0 (and the unreleased 4.5 branch).
When I do a facet search on fieldA with a "large" facet.limit, the query 
fails with the exception below.
A "large" facet.limit seems to be one where (10 + (facet.limit * 1.5)) * number of 
shards > rows matching my query.

The exception does not occur when I run with a single shard collection.
It can easily be reproduced by indexing a single row and querying it, as the 
default facet.limit is 100.

The facet query received by Solr looks as follows:
{noformat}
576793 [qtp170860084-18] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard2_replica1] webapp=/solr path=/select 
 
params={facet=true&start=0&q=*:*&distrib=true&collection=trym&facet.field=fieldA&wt=javabin&version=2&rows=0}
 
 status=500 QTime=20
{noformat}
One of the "internal query" send by Solr to its shard looks like
{noformat}
576783 [qtp170860084-19] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard1_replica1] webapp=/solr path=/select 
 
params={facet=true&distrib=false&collection=trym&wt=javabin&version=2&rows=0&NOW=1379855011787

   
&shard.url=192.168.56.1:8501/solr/trym_shard1_replica1/&df=text&fl=id,score&f.fieldA.facet.limit=160
   &start=0&q=*:*&facet.field=fieldA&isShard=true&fsv=true} 
 hits=1 status=500 QTime=2
{noformat}

{noformat}
576784 [qtp170860084-17] ERROR org.apache.solr.servlet.SolrDispatchFilter  ¦ 
null:java.lang.IllegalStateException: Cannot use facet.mincount=0 on a field 
which is not indexed
at 
org.apache.solr.request.NumericFacets.getCounts(NumericFacets.java:257)
at 
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:423)
at 
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:530)
at 
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:259)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThr

[jira] [Updated] (SOLR-5260) Facet search on a docvalue field in a multi shard collection

2013-09-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trym Møller updated SOLR-5260:
--

Description: 
I have a problem doing facet search on a doc value field in a multi shard 
collection.

My Solr schema specifies fieldA as a docvalue type and I have created a two 
shard collection using Solr 4.4.0 (and the unreleased 4.5 branch).
When I do a facet search on fieldA with a "large" facet.limit, the query 
fails with the exception below.
A "large" facet.limit seems to be one where (10 + (facet.limit * 1.5)) * number of 
shards > rows matching my query.

The exception does not occur when I run with a single shard collection.
It can easily be reproduced by indexing a single row and querying it, as the 
default facet.limit is 100.

The facet query received by Solr looks as follows:
{noformat}
576793 [qtp170860084-18] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard2_replica1] webapp=/solr path=/select 
 
params={facet=true&start=0&q=*:*&distrib=true&collection=trym&facet.field=fieldA&wt=javabin&version=2&rows=0}
 
 status=500 QTime=20
{noformat}
One of the "internal query" send by Solr to its shard looks like
{noformat}
576783 [qtp170860084-19] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard1_replica1] webapp=/solr path=/select 
 
params={facet=true&distrib=false&collection=trym&wt=javabin&version=2&rows=0&NOW=1379855011787

   
&shard.url=192.168.56.1:8501/solr/trym_shard1_replica1/&df=text&fl=id,score&f.fieldA.facet.limit=160
   &start=0&q=*:*&facet.field=fieldA&isShard=true&fsv=true} 
 hits=1 status=500 QTime=2
{noformat}

The exception thrown by Solr is as follows
{noformat}
576784 [qtp170860084-17] ERROR org.apache.solr.servlet.SolrDispatchFilter  ¦ 
null:java.lang.IllegalStateException: 
 Cannot use facet.mincount=0 on a field which is not indexed
at 
org.apache.solr.request.NumericFacets.getCounts(NumericFacets.java:257)
at 
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:423)
at 
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:530)
at 
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:259)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
   

[jira] [Updated] (SOLR-5260) Facet search on a docvalue field in a multi shard collection

2013-09-23 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-5260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Trym Møller updated SOLR-5260:
--

Description: 
I have a problem doing facet search on a doc value field in a multi shard 
collection.

My Solr schema specifies fieldA as a docvalue type and I have created a two 
shard collection using Solr 4.4.0 (and the unreleased 4.5 branch).
When I do a facet search on fieldA with a "large" facet.limit, the query 
fails with the exception below.
A "large" facet.limit seems to be one where (10 + (facet.limit * 1.5)) * number of 
shards > rows matching my query.

The exception does not occur when I run with a single shard collection.
It can easily be reproduced by indexing a single row and querying it, as the 
default facet.limit is 100.

The facet query received by Solr looks as follows:
576793 [qtp170860084-18] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard2_replica1] webapp=/solr path=/select 
params={facet=true&start=0&q=*:*&distrib=true&collection=trym&facet.field=fieldA&wt=javabin&version=2&rows=0}
 status=500 QTime=20

One of the "internal query" send by Solr to its shard looks like
576783 [qtp170860084-19] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard1_replica1] webapp=/solr path=/select 
params={facet=true&distrib=false&collection=trym&wt=javabin&version=2&rows=0&NOW=1379855011787&shard.url=192.168.56.1:8501/solr/trym_shard1_replica1/&df=text&fl=id,score&f.fieldA.facet.limit=160&start=0&q=*:*&facet.field=fieldA&isShard=true&fsv=true}
 hits=1 status=500 QTime=2

576784 [qtp170860084-17] ERROR org.apache.solr.servlet.SolrDispatchFilter  ¦ 
null:java.lang.IllegalStateException: Cannot use facet.mincount=0 on a field 
which is not indexed
at 
org.apache.solr.request.NumericFacets.getCounts(NumericFacets.java:257)
at 
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:423)
at 
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:530)
at 
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:259)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPoo

[jira] [Created] (SOLR-5260) Facet search on a docvalue field in a multi shard collection

2013-09-23 Thread JIRA
Trym Møller created SOLR-5260:
-

 Summary: Facet search on a docvalue field in a multi shard 
collection
 Key: SOLR-5260
 URL: https://issues.apache.org/jira/browse/SOLR-5260
 Project: Solr
  Issue Type: Bug
  Components: search, SolrCloud
Affects Versions: 4.4
Reporter: Trym Møller


I have a problem doing facet search on a doc value field in a multi shard 
collection.

My Solr schema specifies fieldA as a docvalue type and I have created a two 
shard collection using Solr 4.4.0 (and the unreleased 4.5 branch).
When I do a facet search on fieldA with a "large" facet.limit, the query 
fails with the exception below.
A "large" facet.limit seems to be one where (10 + (facet.limit * 1.5)) * number of 
shards > rows matching my query.

The exception does not occur when I run with a single shard collection.
It can easily be reproduced by indexing a single row and querying it, as the 
default facet.limit is 100.

The facet query received by Solr looks as follows:
576793 [qtp170860084-18] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard2_replica1] webapp=/solr path=/select 
params={facet=true&start=0&q=*:*&distrib=true&collection=trym&facet.field=fieldA&wt=javabin&version=2&rows=0}
 status=500 QTime=20
One of the "internal query" send by Solr to its shard looks like
576783 [qtp170860084-19] INFO  org.apache.solr.core.SolrCore  ¦ 
[trym_shard1_replica1] webapp=/solr path=/select 
params={facet=true&distrib=false&collection=trym
 
&wt=javabin&version=2&rows=0&NOW=1379855011787&shard.url=192.168.56.1:8501/solr/trym_shard1_replica1/&df=text&fl=id,score&f.fieldA.facet.limit=160&start=0&q=*:
*&facet.field=fieldA&isShard=true&fsv=true} hits=1 status=500 QTime=2

576784 [qtp170860084-17] ERROR org.apache.solr.servlet.SolrDispatchFilter  ¦ 
null:java.lang.IllegalStateException: Cannot use facet.mincount=0 on a field 
which is not indexed
at 
org.apache.solr.request.NumericFacets.getCounts(NumericFacets.java:257)
at 
org.apache.solr.request.SimpleFacets.getTermCounts(SimpleFacets.java:423)
at 
org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:530)
at 
org.apache.solr.request.SimpleFacets.getFacetCounts(SimpleFacets.java:259)
at 
org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:78)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:208)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1904)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:659)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:362)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:158)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketCon

[jira] [Updated] (LUCENE-5215) Add support for FieldInfos generation

2013-09-23 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5215:
---

Attachment: LUCENE-5215.patch

bq. could we remove SCR.fieldInfos entirely?

Done.

bq. I mean technically I guess it's an optimization

It's a chicken-and-egg problem: SR needs to open the CFS in 
order to read the FieldInfos, but doesn't need to hold it open. SCR needs the 
CFS for reading the various formats, but we must read FIS before we init SCR.

Note that we only do this double-open when we open a new SegReader, never 
when we share a reader (then, if we need to read FIS, it's always outside CFS, 
because it must be gen'd). In that case, maybe it's not so bad to do this 
double-open? I put a TODO in the code for now.

> Add support for FieldInfos generation
> -
>
> Key: LUCENE-5215
> URL: https://issues.apache.org/jira/browse/LUCENE-5215
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch, 
> LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch
>
>
> In LUCENE-5189 we've identified few reasons to do that:
> # If you want to update docs' values of field 'foo', where 'foo' exists in 
> the index, but not in a specific segment (sparse DV), we cannot allow that 
> and have to throw a late UOE. If we could rewrite FieldInfos (with 
> generation), this would be possible since we'd also write a new generation of 
> FIS.
> # When we apply NDV updates, we call DVF.fieldsConsumer. Currently the 
> consumer isn't allowed to change FI.attributes because we cannot modify the 
> existing FIS. This is implicit however, and we silently ignore any modified 
> attributes. FieldInfos.gen will allow that too.
> The idea is to add to SIPC fieldInfosGen, add to each FieldInfo a dvGen and 
> add support for FIS generation in FieldInfosFormat, SegReader etc., like we 
> now do for DocValues. I'll work on a patch.
> Also on LUCENE-5189, Rob raised a concern about SegmentInfo.attributes that 
> have same limitation -- if a Codec modifies them, they are silently being 
> ignored, since we don't gen the .si files. I think we can easily solve that 
> by recording SI.attributes in SegmentInfos, so they are recorded per-commit. 
> But I think it should be handled in a separate issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5109) EliasFano value index

2013-09-23 Thread Paul Elschot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774514#comment-13774514
 ] 

Paul Elschot commented on LUCENE-5109:
--

Patch of 23 September, as announced yesterday.

I tried benchmarking with index divisor 128 instead of 256. It is indeed a 
little bit faster for far advanceTo operations.

I used this code snippet in the benchmark to avoid the EliasFanoDocIdSet being 
used when it is not advisable:

{code}
new DocIdSetFactory() {
  @Override
  public DocIdSet copyOf(FixedBitSet set) throws IOException {
    long numValues = set.cardinality();
    long upperBound = set.prevSetBit(set.length() - 1);
    // prefer the Elias-Fano encoding only when it is expected to be smaller
    if (EliasFanoDocIdSet.sufficientlySmallerThanBitSet(numValues, upperBound)) {
      final EliasFanoDocIdSet copy = new EliasFanoDocIdSet(numValues, upperBound);
      copy.encodeFromDisi(set.iterator());
      return copy;
    } else {
      return set;
    }
  }
}
{code}

The sufficientlySmallerThanBitSet method currently checks for upperBound/7 > 
numValues.
That used to be a division by 6; I added 1 to the divisor because the index was added.

Anyway, "advisable" will depend on better benchmarking than I can do...

> EliasFano value index
> -
>
> Key: LUCENE-5109
> URL: https://issues.apache.org/jira/browse/LUCENE-5109
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Paul Elschot
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-5109.patch, LUCENE-5109.patch, LUCENE-5109.patch
>
>
> Index upper bits of Elias-Fano sequence.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5238) Fix junitcompat tests (so that they're not triggered when previous errors occur)

2013-09-23 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774513#comment-13774513
 ] 

Dawid Weiss commented on LUCENE-5238:
-

http://builds.flonkings.com/job/Lucene-trunk-Linux-Java7-64-test-only/58093/console
{code}
- 
org.apache.lucene.util.junitcompat.TestFailIfUnreferencedFiles.testFailIfUnreferencedFiles
- 
org.apache.lucene.util.junitcompat.TestFailIfDirectoryNotClosed.testFailIfDirectoryNotClosed
{code}


> Fix junitcompat tests (so that they're not triggered when previous errors 
> occur)
> 
>
> Key: LUCENE-5238
> URL: https://issues.apache.org/jira/browse/LUCENE-5238
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
>


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5238) Fix junitcompat tests (so that they're not triggered when previous errors occur)

2013-09-23 Thread Dawid Weiss (JIRA)
Dawid Weiss created LUCENE-5238:
---

 Summary: Fix junitcompat tests (so that they're not triggered when 
previous errors occur)
 Key: LUCENE-5238
 URL: https://issues.apache.org/jira/browse/LUCENE-5238
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Dawid Weiss
Assignee: Dawid Weiss
Priority: Trivial




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5109) EliasFano value index

2013-09-23 Thread Paul Elschot (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Paul Elschot updated LUCENE-5109:
-

Attachment: LUCENE-5109.patch

> EliasFano value index
> -
>
> Key: LUCENE-5109
> URL: https://issues.apache.org/jira/browse/LUCENE-5109
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Paul Elschot
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-5109.patch, LUCENE-5109.patch, LUCENE-5109.patch
>
>
> Index upper bits of Elias-Fano sequence.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5237:
---

Attachment: LUCENE-5237.patch

Added asserts to both methods as well as \@lucene.internal. Also added a minor 
optimization to not call System.arraycopy if asked to delete the last 
character(s). All analysis tests pass.
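
A hedged sketch of the shape of that change (illustrative only, not 
necessarily the committed patch):

{code}
public class StemmerDeleteSketch {
  // Delete the character at pos, returning the new valid length.
  static int delete(char[] s, int pos, int len) {
    assert pos < len : "pos must point inside the valid region";
    if (pos < len - 1) {
      // Only shift when something follows pos; deleting the last character
      // needs no System.arraycopy at all.
      System.arraycopy(s, pos + 1, s, pos, len - pos - 1);
    }
    return len - 1;
  }

  public static void main(String[] args) {
    char[] buf = "abcd".toCharArray();
    int len = delete(buf, 1, buf.length);        // drops 'b'
    System.out.println(new String(buf, 0, len)); // prints acd
  }
}
{code}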

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch, LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-5259) Typo in error message from missing / wrong _version_ field

2013-09-23 Thread Benson Margulies (JIRA)
Benson Margulies created SOLR-5259:
--

 Summary: Typo in error message from missing / wrong _version_ field
 Key: SOLR-5259
 URL: https://issues.apache.org/jira/browse/SOLR-5259
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.4
Reporter: Benson Margulies


Note the missing space between _version_ and field.

Caused by: org.apache.solr.common.SolrException: Unable to use updateLog: 
_version_field must exist in schema, using indexed="true" stored="true" and 
multiValued="false" (_version_ is not indexed

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774485#comment-13774485
 ] 

Shai Erera commented on LUCENE-5237:


bq. You called delete. By definition, this shortens the string.

Yes, but not beyond what I've asked. If you call deleteN("abcd", pos=2, 
nChars=3), you get back "a". It's like pos is completely ignored.

I'll change the code to only assert that pos and nChars don't go beyond len.
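
For illustration, a call like this should still work once the assert is in 
place (hypothetical usage, not taken from the patch):

{code}
// With the assert in place, a valid call removes exactly nChars characters
// starting at pos.
char[] buf = "abcd".toCharArray();
int len = StemmerUtil.deleteN(buf, 1, buf.length, 2); // delete "bc"
assert len == 2;
assert new String(buf, 0, len).equals("ad");
{code}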

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5215) Add support for FieldInfos generation

2013-09-23 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774471#comment-13774471
 ] 

Michael McCandless commented on LUCENE-5215:


Hmm, could we remove SCR.fieldInfos entirely? And pass it into the ctor (passing 
null if it's not "shared"), and also into .getNormValues (it's only 
SegmentReader that calls this)?

{quote}
bq. it looks like we are now double-opening the CFS file

Correct. It's also called from IndexWriter, which opened CFS if needed, 
therefore I thought it's not so critical.
{quote}

I'm less worried about IW, which does this once on open; apps "typically" open 
an IW and use it for a long time, opening many NRT readers from it.

But I don't like adding this double-open to SR's open path (SR's are 
"typically" opened more frequently than IWs), if we can help it.

I mean technically I guess it's an optimization to not double-open the CFS file 
... so we could instead open a follow-on issue to try to fix it.

> Add support for FieldInfos generation
> -
>
> Key: LUCENE-5215
> URL: https://issues.apache.org/jira/browse/LUCENE-5215
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: core/index
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5215.patch, LUCENE-5215.patch, LUCENE-5215.patch, 
> LUCENE-5215.patch, LUCENE-5215.patch
>
>
> In LUCENE-5189 we've identified few reasons to do that:
> # If you want to update docs' values of field 'foo', where 'foo' exists in 
> the index, but not in a specific segment (sparse DV), we cannot allow that 
> and have to throw a late UOE. If we could rewrite FieldInfos (with 
> generation), this would be possible since we'd also write a new generation of 
> FIS.
> # When we apply NDV updates, we call DVF.fieldsConsumer. Currently the 
> consumer isn't allowed to change FI.attributes because we cannot modify the 
> existing FIS. This is implicit however, and we silently ignore any modified 
> attributes. FieldInfos.gen will allow that too.
> The idea is to add to SIPC fieldInfosGen, add to each FieldInfo a dvGen and 
> add support for FIS generation in FieldInfosFormat, SegReader etc., like we 
> now do for DocValues. I'll work on a patch.
> Also on LUCENE-5189, Rob raised a concern about SegmentInfo.attributes that 
> have same limitation -- if a Codec modifies them, they are silently being 
> ignored, since we don't gen the .si files. I think we can easily solve that 
> by recording SI.attributes in SegmentInfos, so they are recorded per-commit. 
> But I think it should be handled in a separate issue.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 7486 - Failure!

2013-09-23 Thread Dawid Weiss
I will try.

Dawid

On Mon, Sep 23, 2013 at 1:00 PM, Robert Muir  wrote:
> There is zero chance of me trying to reproduce it: I'm traveling with a Mac
> only, and I only have 1.6.0_51.
>
> On Mon, Sep 23, 2013 at 3:40 AM, Dawid Weiss
>  wrote:
>> Oh, damn. Didn't notice THAT! Maybe it is something else... or the
>> machine (memory?) is failing?
>>
>> D.
>>
>> On Mon, Sep 23, 2013 at 12:36 PM, Robert Muir  wrote:
>>> On java 6?!?! Any hints as to why you think it might be the same bug?
>>>
>>> On Sep 23, 2013 2:17 AM, "Dawid Weiss"  wrote:

 It's probably https://bugs.openjdk.java.net/browse/JDK-8024830
 (LUCENE-5212).

[junit4] >>> JVM J1: stdout (verbatim) 
[junit4] #
[junit4] # A fatal error has been detected by the Java Runtime
 Environment:
[junit4] #
[junit4] #  SIGSEGV (0xb) at pc=0xf4af4f17, pid=11788, tid=3457669952
[junit4] #
[junit4] # JRE version: 6.0_45-b06
[junit4] # Java VM: Java HotSpot(TM) Client VM (20.45-b01 mixed
 mode linux-x86 )
[junit4] # Problematic frame:
[junit4] # J
 org.apache.lucene.store.TestDirectory.testDirectInstantiation()V
[junit4] #
[junit4] # An error report file with more information is saved as:
[junit4] #
 /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/J1/hs_err_pid11788.log
[junit4] #
[junit4] # If you would like to submit a bug report, please visit:
[junit4] #   http://java.sun.com/webapps/bugreport/crash.jsp
[junit4] #
[junit4] <<< JVM J1: EOF 

 On Sun, Sep 22, 2013 at 4:50 PM, Policeman Jenkins Server
  wrote:
 > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/
 > Java: 32bit/jdk1.6.0_45 -client -XX:+UseConcMarkSweepGC
 >
 > All tests passed
 >
 > Build Log:
 > [...truncated 1268 lines...]
 >[junit4] ERROR: JVM J1 ended with an exception, command line:
 > /var/lib/jenkins/tools/java/32bit/jdk1.6.0_45/jre/bin/java -client
 > -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
 > -XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps
 > -Dtests.prefix=tests -Dtests.seed=4B7F292A927C08A -Xmx512M -Dtests.iters=
 > -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random
 > -Dtests.postingsformat=random -Dtests.docvaluesformat=random
 > -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random
 > -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.6
 > -Dtests.cleanthreads=perMethod
 > -Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
 > -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true
 > -Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=.
 > -Djava.io.tmpdir=.
 > -Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp
 > -Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
 > -Djava.security.manager=org.apache.lucene.util.TestSecurityManager
 > -Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
 > -Dlucene.version=4.6-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1
 > -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory
 > -Djava.awt.headless=true -Dtests.disableHdfs=true 
 > -Dfile.encoding=US-ASCII
 > -classpath
 > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/test:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1

[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774467#comment-13774467
 ] 

Adrien Grand commented on LUCENE-5237:
--

bq. Maybe what we can do is add an assert that pos + nChars < len, and not 
silently delete fewer chars than you asked for? Is that better?

+1 for {{assert pos + nChars <= len;}}

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774458#comment-13774458
 ] 

Robert Muir commented on LUCENE-5237:
-

{quote}
It's not that it does not remove a suffix, but that it removes a suffix you 
didn't ask for!
{quote}

You called delete. By *definition*, this shortens the string.

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 7486 - Failure!

2013-09-23 Thread Robert Muir
There is zero chance of me trying to reproduce it: I'm traveling with a Mac
only, and I only have 1.6.0_51.

On Mon, Sep 23, 2013 at 3:40 AM, Dawid Weiss
 wrote:
> Oh, damn. Didn't notice THAT! Maybe it is something else... or the
> machine (memory?) is failing?
>
> D.
>
> On Mon, Sep 23, 2013 at 12:36 PM, Robert Muir  wrote:
>> On java 6?!?! Any hints as to why you think it might be the same bug?
>>
>> On Sep 23, 2013 2:17 AM, "Dawid Weiss"  wrote:
>>>
>>> It's probably https://bugs.openjdk.java.net/browse/JDK-8024830
>>> (LUCENE-5212).
>>>
>>>[junit4] >>> JVM J1: stdout (verbatim) 
>>>[junit4] #
>>>[junit4] # A fatal error has been detected by the Java Runtime
>>> Environment:
>>>[junit4] #
>>>[junit4] #  SIGSEGV (0xb) at pc=0xf4af4f17, pid=11788, tid=3457669952
>>>[junit4] #
>>>[junit4] # JRE version: 6.0_45-b06
>>>[junit4] # Java VM: Java HotSpot(TM) Client VM (20.45-b01 mixed
>>> mode linux-x86 )
>>>[junit4] # Problematic frame:
>>>[junit4] # J
>>> org.apache.lucene.store.TestDirectory.testDirectInstantiation()V
>>>[junit4] #
>>>[junit4] # An error report file with more information is saved as:
>>>[junit4] #
>>> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/J1/hs_err_pid11788.log
>>>[junit4] #
>>>[junit4] # If you would like to submit a bug report, please visit:
>>>[junit4] #   http://java.sun.com/webapps/bugreport/crash.jsp
>>>[junit4] #
>>>[junit4] <<< JVM J1: EOF 
>>>
>>> On Sun, Sep 22, 2013 at 4:50 PM, Policeman Jenkins Server
>>>  wrote:
>>> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/
>>> > Java: 32bit/jdk1.6.0_45 -client -XX:+UseConcMarkSweepGC
>>> >
>>> > All tests passed
>>> >
>>> > Build Log:
>>> > [...truncated 1268 lines...]
>>> >[junit4] ERROR: JVM J1 ended with an exception, command line:
>>> > /var/lib/jenkins/tools/java/32bit/jdk1.6.0_45/jre/bin/java -client
>>> > -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
>>> > -XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps
>>> > -Dtests.prefix=tests -Dtests.seed=4B7F292A927C08A -Xmx512M -Dtests.iters=
>>> > -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random
>>> > -Dtests.postingsformat=random -Dtests.docvaluesformat=random
>>> > -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random
>>> > -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.6
>>> > -Dtests.cleanthreads=perMethod
>>> > -Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
>>> > -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true
>>> > -Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=.
>>> > -Djava.io.tmpdir=.
>>> > -Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp
>>> > -Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
>>> > -Djava.security.manager=org.apache.lucene.util.TestSecurityManager
>>> > -Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
>>> > -Dlucene.version=4.6-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1
>>> > -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory
>>> > -Djava.awt.headless=true -Dtests.disableHdfs=true -Dfile.encoding=US-ASCII
>>> > -classpath
>>> > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/test:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.task

[jira] [Updated] (SOLR-5258) router.field support for compositeId router

2013-09-23 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5258?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-5258:
-

Attachment: SOLR-5258.patch

> router.field support for compositeId router
> ---
>
> Key: SOLR-5258
> URL: https://issues.apache.org/jira/browse/SOLR-5258
> Project: Solr
>  Issue Type: New Feature
>Reporter: Yonik Seeley
>Assignee: Noble Paul
>Priority: Minor
> Attachments: SOLR-5258.patch
>
>
> Although there is code to support router.field for CompositeId, it only 
> calculates a simple (non-compound) hash, which isn't that useful unless you 
> don't use compound ids (this is why I changed the docs to say router.field is 
> only supported for the implicit router).  The field value should either
> - be used to calculate the full compound hash
> - be used to calculate the prefix bits, and the uniqueKey will still be used 
> for the lower bits.
> For consistency, I'd suggest the former.
> If we want to be able to specify a separate field that is only used for the 
> prefix bits, then perhaps that should be "router.prefixField"
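
Rough sketch of the two options (illustration only -- hash() below stands in 
for the router's real hash function and the 16/16 bit split is simplified; this 
is not CompositeIdRouter code):

{code}
static int hash(String s) { return s.hashCode(); } // placeholder hash

// Option 1: treat the router.field value exactly like a compound id,
// e.g. "tenant!doc42": the prefix drives the upper bits, the rest the lower bits.
static int compoundHash(String value) {
  int sep = value.indexOf('!');
  if (sep < 0) return hash(value);
  return (hash(value.substring(0, sep)) & 0xFFFF0000)
       | (hash(value.substring(sep + 1)) & 0x0000FFFF);
}

// Option 2: router.field supplies only the prefix bits, uniqueKey the rest.
static int prefixFieldHash(String routeFieldValue, String uniqueKey) {
  return (hash(routeFieldValue) & 0xFFFF0000) | (hash(uniqueKey) & 0x0000FFFF);
}
{code}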

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774446#comment-13774446
 ] 

Shai Erera commented on LUCENE-5237:


Let me clarify something -- if the code that's using these methods uses them 
"properly", then the code works well. The bug is that it lets you delete 
characters you didn't intend to. Maybe what we can do is add an assert that 
{{pos + nChars < len}}, and not silently delete fewer chars than you asked for? 
Is that better?

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 7486 - Failure!

2013-09-23 Thread Dawid Weiss
Oh, damn. Didn't notice THAT! Maybe it is something else... or the
machine (memory?) is failing?

D.

On Mon, Sep 23, 2013 at 12:36 PM, Robert Muir  wrote:
> On java 6?!?! Any hints as to why you think it might be the same bug?
>
> On Sep 23, 2013 2:17 AM, "Dawid Weiss"  wrote:
>>
>> It's probably https://bugs.openjdk.java.net/browse/JDK-8024830
>> (LUCENE-5212).
>>
>>[junit4] >>> JVM J1: stdout (verbatim) 
>>[junit4] #
>>[junit4] # A fatal error has been detected by the Java Runtime
>> Environment:
>>[junit4] #
>>[junit4] #  SIGSEGV (0xb) at pc=0xf4af4f17, pid=11788, tid=3457669952
>>[junit4] #
>>[junit4] # JRE version: 6.0_45-b06
>>[junit4] # Java VM: Java HotSpot(TM) Client VM (20.45-b01 mixed
>> mode linux-x86 )
>>[junit4] # Problematic frame:
>>[junit4] # J
>> org.apache.lucene.store.TestDirectory.testDirectInstantiation()V
>>[junit4] #
>>[junit4] # An error report file with more information is saved as:
>>[junit4] #
>> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/J1/hs_err_pid11788.log
>>[junit4] #
>>[junit4] # If you would like to submit a bug report, please visit:
>>[junit4] #   http://java.sun.com/webapps/bugreport/crash.jsp
>>[junit4] #
>>[junit4] <<< JVM J1: EOF 
>>
>> On Sun, Sep 22, 2013 at 4:50 PM, Policeman Jenkins Server
>>  wrote:
>> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/
>> > Java: 32bit/jdk1.6.0_45 -client -XX:+UseConcMarkSweepGC
>> >
>> > All tests passed
>> >
>> > Build Log:
>> > [...truncated 1268 lines...]
>> >[junit4] ERROR: JVM J1 ended with an exception, command line:
>> > /var/lib/jenkins/tools/java/32bit/jdk1.6.0_45/jre/bin/java -client
>> > -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
>> > -XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps
>> > -Dtests.prefix=tests -Dtests.seed=4B7F292A927C08A -Xmx512M -Dtests.iters=
>> > -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random
>> > -Dtests.postingsformat=random -Dtests.docvaluesformat=random
>> > -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random
>> > -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.6
>> > -Dtests.cleanthreads=perMethod
>> > -Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
>> > -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true
>> > -Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=.
>> > -Djava.io.tmpdir=.
>> > -Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp
>> > -Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
>> > -Djava.security.manager=org.apache.lucene.util.TestSecurityManager
>> > -Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
>> > -Dlucene.version=4.6-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1
>> > -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory
>> > -Djava.awt.headless=true -Dtests.disableHdfs=true -Dfile.encoding=US-ASCII
>> > -classpath
>> > /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/test:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/

[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1377#comment-1377
 ] 

Shai Erera commented on LUCENE-5237:


bq. assertion would also be ok I think

ok that's better than an exception.

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774443#comment-13774443
 ] 

Shai Erera commented on LUCENE-5237:


bq. I'd rather have some warning that the stemmer is "broken" / has some 
weirdness than for delete(N) to silently not remove a suffix, that's scary to me 
to just change!

It's not that it does not remove a suffix, but that it removes a suffix you 
didn't ask for!

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774442#comment-13774442
 ] 

Shai Erera commented on LUCENE-5237:


bq. This isn't a bug: if you delete the last character, its all that must 
happen.

You're right. So first, this isn't what happens. If pos=3 and len=4 (delete the 
last character), it calls System.arraycopy (even in the patch I posted). This 
could be improved. Second, the problem is that it deletes the last character 
even if pos >= length, i.e. you ask to delete a character beyond what is 
"valid" in that buffer. I can't believe there is a TokenFilter that relies on 
being able to delete characters beyond the length of the buffer as it knows it.

bq. Shouldn't it throw an exception instead when pos + nChars > buf.length?

Maybe we should ...

bq. We can mark the whole class lucene.internal or copy the code of the methods 
to each class actually using them

You mean inline these methods?

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-4.x-Linux (32bit/jdk1.6.0_45) - Build # 7486 - Failure!

2013-09-23 Thread Robert Muir
On java 6?!?! Any hints as to why you think it might be the same bug?
On Sep 23, 2013 2:17 AM, "Dawid Weiss"  wrote:

> It's probably https://bugs.openjdk.java.net/browse/JDK-8024830 (LUCENE-5212).
>
>[junit4] >>> JVM J1: stdout (verbatim) 
>[junit4] #
>[junit4] # A fatal error has been detected by the Java Runtime
> Environment:
>[junit4] #
>[junit4] #  SIGSEGV (0xb) at pc=0xf4af4f17, pid=11788, tid=3457669952
>[junit4] #
>[junit4] # JRE version: 6.0_45-b06
>[junit4] # Java VM: Java HotSpot(TM) Client VM (20.45-b01 mixed
> mode linux-x86 )
>[junit4] # Problematic frame:
>[junit4] # J
> org.apache.lucene.store.TestDirectory.testDirectInstantiation()V
>[junit4] #
>[junit4] # An error report file with more information is saved as:
>[junit4] #
> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/J1/hs_err_pid11788.log
>[junit4] #
>[junit4] # If you would like to submit a bug report, please visit:
>[junit4] #   http://java.sun.com/webapps/bugreport/crash.jsp
>[junit4] #
>[junit4] <<< JVM J1: EOF 
>
> On Sun, Sep 22, 2013 at 4:50 PM, Policeman Jenkins Server
>  wrote:
> > Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.x-Linux/7486/
> > Java: 32bit/jdk1.6.0_45 -client -XX:+UseConcMarkSweepGC
> >
> > All tests passed
> >
> > Build Log:
> > [...truncated 1268 lines...]
> >[junit4] ERROR: JVM J1 ended with an exception, command line:
> /var/lib/jenkins/tools/java/32bit/jdk1.6.0_45/jre/bin/java -client
> -XX:+UseConcMarkSweepGC -XX:+HeapDumpOnOutOfMemoryError
> -XX:HeapDumpPath=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/heapdumps
> -Dtests.prefix=tests -Dtests.seed=4B7F292A927C08A -Xmx512M -Dtests.iters=
> -Dtests.verbose=false -Dtests.infostream=false -Dtests.codec=random
> -Dtests.postingsformat=random -Dtests.docvaluesformat=random
> -Dtests.locale=random -Dtests.timezone=random -Dtests.directory=random
> -Dtests.linedocsfile=europarl.lines.txt.gz -Dtests.luceneMatchVersion=4.6
> -Dtests.cleanthreads=perMethod
> -Djava.util.logging.config.file=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/logging.properties
> -Dtests.nightly=false -Dtests.weekly=false -Dtests.slow=true
> -Dtests.asserts.gracious=false -Dtests.multiplier=3 -DtempDir=.
> -Djava.io.tmpdir=.
> -Djunit4.tempDir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/test/temp
> -Dclover.db.dir=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/clover/db
> -Djava.security.manager=org.apache.lucene.util.TestSecurityManager
> -Djava.security.policy=/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/tools/junit4/tests.policy
> -Dlucene.version=4.6-SNAPSHOT -Djetty.testMode=1 -Djetty.insecurerandom=1
> -Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory
> -Djava.awt.headless=true -Dtests.disableHdfs=true -Dfile.encoding=US-ASCII
> -classpath
> /mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/test-framework/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/codecs/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/junit-4.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/test-framework/lib/randomizedtesting-runner-2.0.10.jar:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/java:/mnt/ssd/jenkins/workspace/Lucene-Solr-4.x-Linux/lucene/build/core/classes/test:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-launcher.jar:/var/lib/jenkins/.ant/lib/ivy-2.3.0.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jai.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-swing.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-oro.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jmf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-xalan2.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-javamail.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-resolver.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-testutil.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-logging.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-log4j.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-junit.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jsch.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-commons-net.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-apache-bsf.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-jdepend.jar:/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2/lib/ant-netrexx.jar:/var/lib/jenkins/

[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774428#comment-13774428
 ] 

Robert Muir commented on LUCENE-5237:
-

{quote}
Shouldn't it throw an exception instead when pos + nChars > buf.length?
{quote}

assertion would also be ok I think: the code using this should be passing in 
the length coming from the term buffer and should "know what it's doing", e.g. 
this isn't a String class.

either way: I'd rather have some warning that the stemmer is "broken" / has 
some weirdness than for delete(N) to silently not remove a suffix, that's scary 
to me to just change!


> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774419#comment-13774419
 ] 

Robert Muir commented on LUCENE-5237:
-

{quote}
The problem is in delete(), which always returns len-1, even if no character is 
actually deleted.
{quote}

This isn't a bug: if you delete the last character, its all that must happen.

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5084) new field type - EnumField

2013-09-23 Thread Elran Dvir (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774412#comment-13774412
 ] 

Elran Dvir commented on SOLR-5084:
--

Hi Erick,
 
Thanks for the feedback.

1) I think your suggestion would be much better, especially if we can keep the 
syntax fairly compact. However, I would like to separate that effort from this 
change (it might apply to CurrencyField and other use cases, so it might warrant 
a different issue).

1.5) Gone in today's patch.

2) I'm not seeing 17 in the patch; I have ENUM_FIELD_VALUE as 18. I rechecked 
today's patch as well - not sure where the difference is coming from.

3) I am not checking the null value of this.max. I am checking the null value of 
the function parameter max. So this.max should be set to non-null.

4) Usage of isNullOrEmpty - again, I've removed the isNullOrEmpty code in the 
Sep 1st patch. Can't explain why you are still seeing it... I also have an 
override of storedToIndexed showing in my patch.

5) There were inclusive range tests in the function testEnumRangeSearch. I have 
now added exclusive range tests in today's patch.

I have removed leading underscores from members in today's patch.

Thanks!

> new field type - EnumField
> --
>
> Key: SOLR-5084
> URL: https://issues.apache.org/jira/browse/SOLR-5084
> Project: Solr
>  Issue Type: New Feature
>Reporter: Elran Dvir
>Assignee: Erick Erickson
> Attachments: enumsConfig.xml, schema_example.xml, Solr-5084.patch, 
> Solr-5084.patch, Solr-5084.patch, Solr-5084.patch, Solr-5084.trunk.patch, 
> Solr-5084.trunk.patch, Solr-5084.trunk.patch
>
>
> We have encountered a use case in our system where we have a few fields 
> (Severity. Risk etc) with a closed set of values, where the sort order for 
> these values is pre-determined but not lexicographic (Critical is higher than 
> High). Generically this is very close to how enums work.
> To implement, I have prototyped a new type of field: EnumField where the 
> inputs are a closed predefined  set of strings in a special configuration 
> file (similar to currency.xml).
> The code is based on 4.2.1.
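
Roughly, the idea is that each configured value maps to a fixed ordinal and the 
ordinal is what gets indexed and sorted on (sketch with made-up values, not the 
patch's classes):

{code}
// Illustration only: an ordered, closed set of values mapped to ints so that
// sorting follows the configured order rather than the lexicographic one.
static final String[] SEVERITY_ORDER = { "Low", "Medium", "High", "Critical" };

static int ordinalOf(String value) {
  for (int i = 0; i < SEVERITY_ORDER.length; i++) {
    if (SEVERITY_ORDER[i].equals(value)) {
      return i; // "Critical" (3) sorts above "High" (2)
    }
  }
  return -1; // unknown value
}
{code}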

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-5084) new field type - EnumField

2013-09-23 Thread Elran Dvir (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elran Dvir updated SOLR-5084:
-

Attachment: Solr-5084.trunk.patch

> new field type - EnumField
> --
>
> Key: SOLR-5084
> URL: https://issues.apache.org/jira/browse/SOLR-5084
> Project: Solr
>  Issue Type: New Feature
>Reporter: Elran Dvir
>Assignee: Erick Erickson
> Attachments: enumsConfig.xml, schema_example.xml, Solr-5084.patch, 
> Solr-5084.patch, Solr-5084.patch, Solr-5084.patch, Solr-5084.trunk.patch, 
> Solr-5084.trunk.patch, Solr-5084.trunk.patch
>
>
> We have encountered a use case in our system where we have a few fields 
> (Severity. Risk etc) with a closed set of values, where the sort order for 
> these values is pre-determined but not lexicographic (Critical is higher than 
> High). Generically this is very close to how enums work.
> To implement, I have prototyped a new type of field: EnumField where the 
> inputs are a closed predefined  set of strings in a special configuration 
> file (similar to currency.xml).
> The code is based on 4.2.1.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774396#comment-13774396
 ] 

Robert Muir commented on LUCENE-5237:
-

I don't think we should change the semantics of these methods without tests 
(including relevance tests): lots of algorithms originally designed in e.g. C 
are using them.

We can mark the whole class lucene.internal or copy the code of the methods to 
each class actually using them (thats all ok to me).


> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774381#comment-13774381
 ] 

Adrien Grand commented on LUCENE-5237:
--

Shouldn't it throw an exception instead when pos + nChars > buf.length?

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated LUCENE-5237:
---

Attachment: LUCENE-5237.patch

Fixed the bug, I also handled the TODO in deleteN.

> StemmerUtil.deleteN may delete too many characters
> --
>
> Key: LUCENE-5237
> URL: https://issues.apache.org/jira/browse/LUCENE-5237
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Reporter: Shai Erera
>Assignee: Shai Erera
> Attachments: LUCENE-5237.patch
>
>
> StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
> many characters. E.g. if you execute this code:
> {code}
> char[] buf = "abcd".toCharArray();
> int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
> System.out.println(new String(buf, 0, len));
> {code}
> You get "a", even though no character should have been deleted (not according 
> to the javadocs nor common logic).
> The problem is in delete(), which always returns {{len-1}}, even if no 
> character is actually deleted.
> I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5237) StemmerUtil.deleteN may delete too many characters

2013-09-23 Thread Shai Erera (JIRA)
Shai Erera created LUCENE-5237:
--

 Summary: StemmerUtil.deleteN may delete too many characters
 Key: LUCENE-5237
 URL: https://issues.apache.org/jira/browse/LUCENE-5237
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Reporter: Shai Erera
Assignee: Shai Erera


StemmerUtil.deleteN calls to delete(), but in some cases, it may delete too 
many characters. E.g. if you execute this code:

{code}
char[] buf = "abcd".toCharArray();
int len = StemmerUtil.deleteN(buf, buf.length, buf.length, 3);
System.out.println(new String(buf, 0, len));
{code}

You get "a", even though no character should have been deleted (not according 
to the javadocs nor common logic).

The problem is in delete(), which always returns {{len-1}}, even if no 
character is actually deleted.

I'll post a patch that fixes it shortly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org