Re: [VOTE] Release PyLucene 4.10.1-0

2014-09-30 Thread Aric Coady
I’ve found a regression involving Python* classes.  If the overridden methods 
raise an error, it’s causing a crash instead of propagating the error.  Here’s 
a simple example:

from org.apache.pylucene.search import PythonFilter

class Filter(PythonFilter):
    """Broken filter to test errors are raised."""
    def getDocIdSet(self, *args):
        assert False

Run any search using an instance of that filter and it should reproduce.
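
For anyone who wants a quick repro, here is a minimal sketch (it assumes an
initialized VM and an already-populated index in a Directory named
"directory"; the setup names are illustrative, the Filter above is the point):

import lucene
lucene.initVM()
from org.apache.lucene.index import DirectoryReader
from org.apache.lucene.search import IndexSearcher, MatchAllDocsQuery, FilteredQuery

reader = DirectoryReader.open(directory)   # any populated index
searcher = IndexSearcher(reader)
query = FilteredQuery(MatchAllDocsQuery(), Filter())   # Filter defined above
searcher.search(query, 10)   # should propagate the AssertionError, not crash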

On Sep 29, 2014, at 7:05 PM, Andi Vajda va...@apache.org wrote:

 
 The PyLucene 4.10.1-0 release tracking today's release of Apache Lucene 
 4.10.1 is ready.
 
 *** ATTENTION ***
 
 Starting with release 4.8.0, Lucene now requires Java 1.7 at the minimum.
 Using Java 1.6 with Lucene 4.8.0 and newer is not supported.
 
 On Mac OS X, Java 6 is still a common default; please upgrade if you haven't 
 done so already. A common upgrade is Oracle Java 1.7 for Mac OS X:
  http://docs.oracle.com/javase/7/docs/webnotes/install/mac/mac-jdk.html
 
 On Mac OS X, once installed, a way to make Java 1.7 the default in your bash 
 shell is:
  $ export JAVA_HOME=`/usr/libexec/java_home`
 Be sure to verify that this JAVA_HOME value is correct.
 
 On any system, if you're upgrading your Java installation, please rebuild
 JCC as well. You must use the same version of Java for both JCC and PyLucene.
 
 *** /ATTENTION ***
 
 
 A release candidate is available from:
 http://people.apache.org/~vajda/staging_area/
 
 A list of changes in this release can be seen at:
 http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_10/CHANGES
 
 PyLucene 4.10.1 is built with JCC 2.21 included in these release artifacts.
 
 A list of Lucene Java changes can be seen at:
 http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_10_1/lucene/CHANGES.txt
 
 Please vote to release these artifacts as PyLucene 4.10.1-0.
 Anyone interested in this release can and should vote !
 
 Thanks !
 
 Andi..
 
 ps: the KEYS file for PyLucene release signing is at:
 http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
 http://people.apache.org/~vajda/staging_area/KEYS
 
 pps: here is my +1



Re: [VOTE] Release PyLucene 4.10.1-0

2014-09-30 Thread Andi Vajda


On Tue, 30 Sep 2014, Aric Coady wrote:


I've found a regression involving Python* classes.  If the overridden methods 
raise an error, it's causing a crash instead of propagating the error.  Here's 
a simple example:

from org.apache.pylucene.search import PythonFilter

class Filter(PythonFilter):
    """Broken filter to test errors are raised."""
    def getDocIdSet(self, *args):
        assert False


I added the same 'assert False' line at line 69 of 
test/test_FilteredQuery.py and the test fails (as expected), but I get no 
crash.
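
For reference, the modified override looks roughly like this (assuming the
usual 4.x Filter signature; only the assert is the added line):

    def getDocIdSet(self, context, acceptDocs):
        assert False   # line 69: raise instead of returning a DocIdSet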


Andi..



Run any search using an instance of that filter and it should reproduce.

On Sep 29, 2014, at 7:05 PM, Andi Vajda va...@apache.org wrote:



The PyLucene 4.10.1-0 release tracking today's release of Apache Lucene 4.10.1 
is ready.

*** ATTENTION ***

Starting with release 4.8.0, Lucene now requires Java 1.7 at the minimum.
Using Java 1.6 with Lucene 4.8.0 and newer is not supported.

On Mac OS X, Java 6 is still a common default; please upgrade if you haven't 
done so already. A common upgrade is Oracle Java 1.7 for Mac OS X:
 http://docs.oracle.com/javase/7/docs/webnotes/install/mac/mac-jdk.html

On Mac OS X, once installed, a way to make Java 1.7 the default in your bash 
shell is:
 $ export JAVA_HOME=`/usr/libexec/java_home`
Be sure to verify that this JAVA_HOME value is correct.

On any system, if you're upgrading your Java installation, please rebuild
JCC as well. You must use the same version of Java for both JCC and PyLucene.

*** /ATTENTION ***


A release candidate is available from:
http://people.apache.org/~vajda/staging_area/

A list of changes in this release can be seen at:
http://svn.apache.org/repos/asf/lucene/pylucene/branches/pylucene_4_10/CHANGES

PyLucene 4.10.1 is built with JCC 2.21 included in these release artifacts.

A list of Lucene Java changes can be seen at:
http://svn.apache.org/repos/asf/lucene/dev/tags/lucene_solr_4_10_1/lucene/CHANGES.txt

Please vote to release these artifacts as PyLucene 4.10.1-0.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
http://svn.apache.org/repos/asf/lucene/pylucene/dist/KEYS
http://people.apache.org/~vajda/staging_area/KEYS

pps: here is my +1






[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_20) - Build # 11354 - Failure!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11354/
Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
REGRESSION:  org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery

Error Message:


Stack Trace:
java.lang.AssertionError: 
at 
__randomizedtesting.SeedInfo.seed([382EAA1F1F5A32C9:8B799E5D7B87D64C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.assertQEx(SolrTestCaseJ4.java:850)
at 
org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery(ExitableDirectoryReaderTest.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


REGRESSION:  
org.apache.solr.core.ExitableDirectoryReaderTest.testQueriesOnDocsWithMultipleTerms

Error Message:


Stack Trace:

[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.9.0-ea-b28) - Build # 11202 - Still Failing!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11202/
Java: 64bit/jdk1.9.0-ea-b28 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
REGRESSION:  org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery

Error Message:


Stack Trace:
java.lang.AssertionError: 
at 
__randomizedtesting.SeedInfo.seed([9B1CCC34EB011EB2:284BF8768FDCFA37]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.assertQEx(SolrTestCaseJ4.java:850)
at 
org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery(ExitableDirectoryReaderTest.java:61)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:484)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
org.apache.solr.cloud.DeleteLastCustomShardedReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle 

[jira] [Commented] (LUCENE-5978) don't write a norm of infinity when analyzer returns no tokens

2014-09-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153001#comment-14153001
 ] 

Michael McCandless commented on LUCENE-5978:


+1

 don't write a norm of infinity when analyzer returns no tokens
 --

 Key: LUCENE-5978
 URL: https://issues.apache.org/jira/browse/LUCENE-5978
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5978.patch


 When a document doesn't have the field, we fill with zero. When a segment 
 doesn't have the field, we also fill with zero.
 However, when the analyzer doesn't return any terms for the field, we still 
 call similarity.computeNorm(0)... with the default similarity this encodes 
 infinity... -1.
 In such a case, it doesn't really matter what the norm is, since it has no 
 terms. But it's more efficient, e.g. for compression, if we consistently use 
 zero.
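
A toy sketch of the arithmetic being described (DefaultSimilarity computes
boost/sqrt(numTerms), so zero terms yields infinity; the -1 mentioned above
is its byte encoding; this snippet is illustrative, not the Lucene code):

    import math

    def length_norm(num_terms, boost=1.0):
        # DefaultSimilarity-style norm; zero terms -> infinity
        return boost / math.sqrt(num_terms) if num_terms > 0 else float('inf')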



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6572) lineshift in solrconfig.xml is not supported

2014-09-30 Thread Fredrik Rodland (JIRA)
Fredrik Rodland created SOLR-6572:
-

 Summary: lineshift in solrconfig.xml is not supported
 Key: SOLR-6572
 URL: https://issues.apache.org/jira/browse/SOLR-6572
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.8.1
Reporter: Fredrik Rodland


This has been a problem for a long time, and is still a problem at least for 
Solr 4.8.1.

If line breaks are introduced inside some elements in solrconfig.xml, Solr 
fails to pick up the values.

example:
ok:
{code}
<requestHandler name="/replication" class="solr.ReplicationHandler" 
enable="${enable.replication:false}">
<lst name="slave">
<str 
name="masterUrl">${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}</str>
{code}

not ok:
{code}
<requestHandler name="/replication" class="solr.ReplicationHandler" 
enable="${enable.replication:false}">
<lst name="slave">
<str 
name="masterUrl">${solr.master.url:http://solr-admin1.finn.no:12910/solr/front-static/replication}
</str>
{code}

Other example:
ok:
{code}
<str 
name="shards">localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr</str>
{code}

not ok:
{code}
<str name="shards">
localhost:12100/solr,localhost:12200/solr,localhost:12300/solr,localhost:12400/solr,localhost:12500/solr,localhost:12530/solr
   </str>
{code}

IDEs and people tend to introduce line breaks in XML files to make them 
prettier.  Solr should really not be affected by this.
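
A minimal sketch of the behavior being asked for (hypothetical; Solr's actual 
config parsing lives in its XPath/DOM layer, not in a helper like this):

    def read_config_text(element_text):
        # treat leading/trailing whitespace and line breaks as insignificant
        return element_text.strip()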



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153017#comment-14153017
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1628382 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1628382 ]

LUCENE-5969: current state for dv/norms

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Early Access builds for JDK 9 b32 and JDK 8u40 b07 are available on java.net

2014-09-30 Thread Rory O'Donnell Oracle, Dublin Ireland

Hi Uwe & Dawid,

Early Access build for JDK 9 b32 https://jdk9.java.net/download/ is 
available on java.net; a summary of changes is listed here: 
http://www.java.net/download/jdk9/changes/jdk9-b32.html


Early Access build for JDK 8u40 b07 http://jdk8.java.net/download.html 
is available on java.net; a summary of changes is listed here: 
http://www.java.net/download/jdk8u40/changes/jdk8u40-b07.html


Rgds,Rory

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



Early Access builds for JDK 9 b32 and JDK 8u40 b07 are available on java.net

2014-09-30 Thread Rory O'Donnell Oracle, Dublin Ireland

Hi Stefan,

Early Access build for JDK 9 b32 https://jdk9.java.net/download/ is 
available on java.net; a summary of changes is listed here: 
http://www.java.net/download/jdk9/changes/jdk9-b32.html


Early Access build for JDK 8u40 b07 http://jdk8.java.net/download.html 
is available on java.net; a summary of changes is listed here: 
http://www.java.net/download/jdk8u40/changes/jdk8u40-b07.html


Rgds,Rory

--
Rgds,Rory O'Donnell
Quality Engineering Manager
Oracle EMEA , Dublin, Ireland



[jira] [Created] (SOLR-6573) Query elevation fails when localParams are used

2014-09-30 Thread Radek Urbas (JIRA)
Radek Urbas created SOLR-6573:
-

 Summary: Query elevation fails when localParams are used
 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas


Elevation does not work when localParams are specified.
In the example collection1 shipped with Solr, a query like this one 
http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated] 
properly returns elevated documents on top.

If localParams are specified, e.g. {!q.op=AND}, in a query like 
http://localhost:8983/solr/collection1/elevate?q={!q.op=AND}ipod&fl=id,title,[elevated]
the documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6573) Query elevation fails when localParams are used

2014-09-30 Thread Radek Urbas (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Radek Urbas updated SOLR-6573:
--
Description: 
Elevation does not work when localParams are specified.
In the example collection1 shipped with Solr, a query like this one 
{code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
 properly returns elevated documents on top.

If localParams are specified, e.g. {!q.op=AND}, in a query like 
{code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
the documents are not elevated anymore.

  was:
Elevation does not work when localParams are specified.
In the example collection1 shipped with Solr, a query like this one 
http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated] 
properly returns elevated documents on top.

If localParams are specified, e.g. {!q.op=AND}, in a query like 
http://localhost:8983/solr/collection1/elevate?q={!q.op=AND}ipod&fl=id,title,[elevated]
the documents are not elevated anymore.


 Query elevation fails when localParams are used
 ---

 Key: SOLR-6573
 URL: https://issues.apache.org/jira/browse/SOLR-6573
 Project: Solr
  Issue Type: Bug
  Components: search
Affects Versions: 4.10.1
Reporter: Radek Urbas

 Elevation does not work when localParams are specified.
 In the example collection1 shipped with Solr, a query like this one 
 {code}http://localhost:8983/solr/collection1/elevate?q=ipod&fl=id,title,[elevated]{code}
  properly returns elevated documents on top.
 If localParams are specified, e.g. {!q.op=AND}, in a query like 
 {code}http://localhost:8983/solr/collection1/elevate?q=\{!q.op=AND\}ipod&fl=id,title,[elevated]{code}
 the documents are not elevated anymore.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5938) New DocIdSet implementation with random write access

2014-09-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153036#comment-14153036
 ] 

Michael McCandless commented on LUCENE-5938:


Thanks [~jpountz], new patch looks great!  +1 to commit.  Thank you for 
explaining that test failure!

 New DocIdSet implementation with random write access
 

 Key: LUCENE-5938
 URL: https://issues.apache.org/jira/browse/LUCENE-5938
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Attachments: LUCENE-5938.patch, LUCENE-5938.patch, LUCENE-5938.patch, 
 LUCENE-5938.patch, LUCENE-5938.patch, low_freq.tasks


 We have a great cost API that is supposed to help make decisions about how to 
 best execute queries. However, due to the fact that several of our filter 
 implementations (eg. TermsFilter and BooleanFilter) return FixedBitSets, 
 either we use the cost API and make bad decisions, or need to fall back to 
 heuristics which are not as good such as 
 RandomAccessFilterStrategy.useRandomAccess which decides that random access 
 should be used if the first doc in the set is less than 100.
 On the other hand, we also have some nice compressed and cacheable DocIdSet 
 implementation but we cannot make use of them because TermsFilter requires a 
 DocIdSet that has random write access, and FixedBitSet is the only DocIdSet 
 that we have that supports random access.
 I think it would be nice to replace FixedBitSet in those filters with another 
 DocIdSet that would also support random write access but would have a better 
 cost?
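
A toy sketch of what "random write access plus a meaningful cost" means here
(illustrative only; the real implementation is Lucene's SparseFixedBitSet
added on this issue, not this):

    class SparseDocIdSet:
        def __init__(self):
            self.docs = set()
        def set(self, doc_id):        # random write access
            self.docs.add(doc_id)
        def get(self, doc_id):        # random read access
            return doc_id in self.docs
        def cost(self):               # cost ~ number of matching docs
            return len(self.docs)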



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153046#comment-14153046
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1628386 from [~rcmuir] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1628386 ]

LUCENE-5969: add segment suffix safety

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5938) New DocIdSet implementation with random write access

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153100#comment-14153100
 ] 

ASF subversion and git services commented on LUCENE-5938:
-

Commit 1628402 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1628402 ]

LUCENE-5938: Add a new sparse fixed bit set and remove ConstantScoreAutoRewrite.

 New DocIdSet implementation with random write access
 

 Key: LUCENE-5938
 URL: https://issues.apache.org/jira/browse/LUCENE-5938
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Attachments: LUCENE-5938.patch, LUCENE-5938.patch, LUCENE-5938.patch, 
 LUCENE-5938.patch, LUCENE-5938.patch, low_freq.tasks


 We have a great cost API that is supposed to help make decisions about how to 
 best execute queries. However, due to the fact that several of our filter 
 implementations (eg. TermsFilter and BooleanFilter) return FixedBitSets, 
 either we use the cost API and make bad decisions, or need to fall back to 
 heuristics which are not as good such as 
 RandomAccessFilterStrategy.useRandomAccess which decides that random access 
 should be used if the first doc in the set is less than 100.
 On the other hand, we also have some nice compressed and cacheable DocIdSet 
 implementation but we cannot make use of them because TermsFilter requires a 
 DocIdSet that has random write access, and FixedBitSet is the only DocIdSet 
 that we have that supports random access.
 I think it would be nice to replace FixedBitSet in those filters with another 
 DocIdSet that would also support random write access but would have a better 
 cost?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5938) New DocIdSet implementation with random write access

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153101#comment-14153101
 ] 

ASF subversion and git services commented on LUCENE-5938:
-

Commit 1628406 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1628406 ]

LUCENE-5938: Add a new sparse fixed bit set and remove ConstantScoreAutoRewrite.

 New DocIdSet implementation with random write access
 

 Key: LUCENE-5938
 URL: https://issues.apache.org/jira/browse/LUCENE-5938
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Attachments: LUCENE-5938.patch, LUCENE-5938.patch, LUCENE-5938.patch, 
 LUCENE-5938.patch, LUCENE-5938.patch, low_freq.tasks


 We have a great cost API that is supposed to help make decisions about how to 
 best execute queries. However, due to the fact that several of our filter 
 implementations (eg. TermsFilter and BooleanFilter) return FixedBitSets, 
 either we use the cost API and make bad decisions, or need to fall back to 
 heuristics which are not as good such as 
 RandomAccessFilterStrategy.useRandomAccess which decides that random access 
 should be used if the first doc in the set is less than 100.
 On the other hand, we also have some nice compressed and cacheable DocIdSet 
 implementation but we cannot make use of them because TermsFilter requires a 
 DocIdSet that has random write access, and FixedBitSet is the only DocIdSet 
 that we have that supports random access.
 I think it would be nice to replace FixedBitSet in those filters with another 
 DocIdSet that would also support random write access but would have a better 
 cost?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5938) New DocIdSet implementation with random write access

2014-09-30 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5938?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-5938.
--
   Resolution: Fixed
Fix Version/s: 5.0

Thanks Mike!

 New DocIdSet implementation with random write access
 

 Key: LUCENE-5938
 URL: https://issues.apache.org/jira/browse/LUCENE-5938
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
 Fix For: 5.0

 Attachments: LUCENE-5938.patch, LUCENE-5938.patch, LUCENE-5938.patch, 
 LUCENE-5938.patch, LUCENE-5938.patch, low_freq.tasks


 We have a great cost API that is supposed to help make decisions about how to 
 best execute queries. However, due to the fact that several of our filter 
 implementations (eg. TermsFilter and BooleanFilter) return FixedBitSets, 
 either we use the cost API and make bad decisions, or need to fall back to 
 heuristics which are not as good such as 
 RandomAccessFilterStrategy.useRandomAccess which decides that random access 
 should be used if the first doc in the set is less than 100.
 On the other hand, we also have some nice compressed and cacheable DocIdSet 
 implementation but we cannot make use of them because TermsFilter requires a 
 DocIdSet that has random write access, and FixedBitSet is the only DocIdSet 
 that we have that supports random access.
 I think it would be nice to replace FixedBitSet in those filters with another 
 DocIdSet that would also support random write access but would have a better 
 cost?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5979) Use the cost API instead of a heuristic on the first document in FilteredQuery to decide on whether to use random access

2014-09-30 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-5979:


 Summary: Use the cost API instead of a heuristic on the first 
document in FilteredQuery to decide on whether to use random access
 Key: LUCENE-5979
 URL: https://issues.apache.org/jira/browse/LUCENE-5979
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0


Now that some major filters such as TermsFilter and MultiTermQueryWrapperFilter 
return DocIdSets that have a better cost, we should switch to the cost API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5979) Use the cost API instead of a heuristic on the first document in FilteredQuery to decide on whether to use random access

2014-09-30 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-5979:
-
Attachment: LUCENE-5979.patch

Here is a simple patch. I imagine the idea of {{firstFilterDoc < 100}} was to 
use random access when the filter matches more than 1% of documents, so I 
translated it to {{filterCost * 100 > maxDoc}}.
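
In other words (a toy sketch of the translated check; the names are
illustrative, not the patch itself):

    def use_random_access(filter_cost, max_doc):
        # random access pays off once the filter matches more than ~1% of docs
        return filter_cost * 100 > max_doc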

 Use the cost API instead of a heuristic on the first document in 
 FilteredQuery to decide on whether to use random access
 

 Key: LUCENE-5979
 URL: https://issues.apache.org/jira/browse/LUCENE-5979
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5979.patch


 Now that some major filters such as TermsFilter and 
 MultiTermQueryWrapperFilter return DocIdSets that have a better cost, we 
 should switch to the cost API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5979) Use the cost API instead of a heuristic on the first document in FilteredQuery to decide on whether to use random access

2014-09-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153152#comment-14153152
 ] 

Robert Muir commented on LUCENE-5979:
-

I like the idea of using a better heuristic here. I guess my question is 
whether interpreting the cost method as an absolute value (versus just using 
it as a relative one) is safe?

 Use the cost API instead of a heuristic on the first document in 
 FilteredQuery to decide on whether to use random access
 

 Key: LUCENE-5979
 URL: https://issues.apache.org/jira/browse/LUCENE-5979
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0

 Attachments: LUCENE-5979.patch


 Now that some major filters such as TermsFilter and 
 MultiTermQueryWrapperFilter return DocIdSets that have a better cost, we 
 should switch to the cost API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153156#comment-14153156
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1628439 from [~mikemccand] in branch 'dev/branches/lucene5969'
[ https://svn.apache.org/r1628439 ]

LUCENE-5969: fix nocommit

 Add Lucene50Codec
 -

 Key: LUCENE-5969
 URL: https://issues.apache.org/jira/browse/LUCENE-5969
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5969.patch, LUCENE-5969.patch


 Spinoff from LUCENE-5952:
   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
 read time.
   * Lucene42TermVectorsFormat should not use the same codecName as 
 Lucene41StoredFieldsFormat
 It would also be nice if we had a bumpCodecVersion script so rolling a new 
 codec is not so daunting.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-5x-Linux-Java7-64-test-only - Build # 32308 - Failure!

2014-09-30 Thread builder
Build: builds.flonkings.com/job/Lucene-5x-Linux-Java7-64-test-only/32308/

1 tests failed.
REGRESSION:  org.apache.lucene.index.TestReaderClosed.test

Error Message:
Task java.util.concurrent.ExecutorCompletionService$QueueingFuture@27a64b69 
rejected from java.util.concurrent.ThreadPoolExecutor@16a0eed5[Terminated, pool 
size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]

Stack Trace:
java.util.concurrent.RejectedExecutionException: Task 
java.util.concurrent.ExecutorCompletionService$QueueingFuture@27a64b69 rejected 
from java.util.concurrent.ThreadPoolExecutor@16a0eed5[Terminated, pool size = 
0, active threads = 0, queued tasks = 0, completed tasks = 1]
at 
__randomizedtesting.SeedInfo.seed([39D86455539BDAA0:B18C5B8FFD67B758]:0)
at 
java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
at 
java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
at 
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
at 
java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
at 
org.apache.lucene.search.IndexSearcher$ExecutionHelper.submit(IndexSearcher.java:823)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:447)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:273)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:261)
at 
org.apache.lucene.index.TestReaderClosed.test(TestReaderClosed.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[jira] [Commented] (LUCENE-740) Bugs in contrib/snowball/.../SnowballProgram.java - Kraaij-Pohlmann gives Index-OOB Exception

2014-09-30 Thread Fergal Monaghan (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153177#comment-14153177
 ] 

Fergal Monaghan commented on LUCENE-740:


Hi all, I've read this thread but am still unclear as to why this was marked 
"Won't Fix", which I guess means the supplied patch has not been applied. 
Can someone please clarify if and why this is the case? Regardless, I am still 
seeing this issue. In my POM I am using:
<dependency>
  <groupId>org.cleartk</groupId>
  <artifactId>cleartk-snowball</artifactId>
  <version>2.0.0</version>
</dependency>

Which in the effective POM resolves the dependency to lucene-snowball (and 
therefore this error):
  <groupId>org.apache.lucene</groupId>
  <artifactId>lucene-snowball</artifactId>
  <version>3.0.3</version>

Thanks very much,
Fergal.

 Bugs in contrib/snowball/.../SnowballProgram.java - Kraaij-Pohlmann gives 
 Index-OOB Exception
 --

 Key: LUCENE-740
 URL: https://issues.apache.org/jira/browse/LUCENE-740
 Project: Lucene - Core
  Issue Type: Bug
  Components: modules/analysis
Affects Versions: 1.9
 Environment: linux amd64
Reporter: Andreas Kohn
Priority: Minor
 Attachments: 740-license.txt, lucene-1.9.1-SnowballProgram.java, 
 snowball.patch.txt


 (copied from mail to java-user)
 while playing with the various stemmers of Lucene(-1.9.1), I got an
 index out of bounds exception:
 lucene-1.9.1> java -cp
 build/contrib/snowball/lucene-snowball-1.9.2-dev.jar
 net.sf.snowball.TestApp Kp bla.txt
 Exception in thread "main" java.lang.reflect.InvocationTargetException
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:615)
at net.sf.snowball.TestApp.main(TestApp.java:56)
 Caused by: java.lang.StringIndexOutOfBoundsException: String index out
 of range: 11
at java.lang.StringBuffer.charAt(StringBuffer.java:303)
at 
 net.sf.snowball.SnowballProgram.find_among_b(SnowballProgram.java:270)
at net.sf.snowball.ext.KpStemmer.r_Step_4(KpStemmer.java:1122)
at net.sf.snowball.ext.KpStemmer.stem(KpStemmer.java:1997)
 This happens when executing
 lucene-1.9.1> java -cp
 build/contrib/snowball/lucene-snowball-1.9.2-dev.jar
 net.sf.snowball.TestApp Kp bla.txt
 bla.txt contains just this word: 'spijsvertering'.
 After some debugging, and some tests with the original snowball
 distribution from snowball.tartarus.org, it seems that the attached
 change is needed to avoid the exception.
 (The change comes from tartarus' SnowballProgram.java)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-5x-Linux-Java7-64-test-only - Build # 32308 - Failure!

2014-09-30 Thread Adrien Grand
I'm looking into this one.

On Tue, Sep 30, 2014 at 3:54 PM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-5x-Linux-Java7-64-test-only/32308/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestReaderClosed.test

 Error Message:
 Task java.util.concurrent.ExecutorCompletionService$QueueingFuture@27a64b69 
 rejected from java.util.concurrent.ThreadPoolExecutor@16a0eed5[Terminated, 
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]

 Stack Trace:
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ExecutorCompletionService$QueueingFuture@27a64b69 
 rejected from java.util.concurrent.ThreadPoolExecutor@16a0eed5[Terminated, 
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
 at 
 __randomizedtesting.SeedInfo.seed([39D86455539BDAA0:B18C5B8FFD67B758]:0)
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
 at 
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
 at 
 java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
 at 
 org.apache.lucene.search.IndexSearcher$ExecutionHelper.submit(IndexSearcher.java:823)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:447)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:273)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:261)
 at 
 org.apache.lucene.index.TestReaderClosed.test(TestReaderClosed.java:67)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 

Re: [JENKINS] Lucene-5x-Linux-Java7-64-test-only - Build # 32308 - Failure!

2014-09-30 Thread Adrien Grand
Similar issue to the one that Mike found yesterday, which is due to the
fact that the index is not consulted in the rewrite phase. We get a
rejection because LuceneTestCase registers an index-reader close
listener that closes the thread pool, so I modified this test to pass
when a RejectedExecutionException is thrown.

On Tue, Sep 30, 2014 at 4:01 PM, Adrien Grand jpou...@gmail.com wrote:
 I'm looking into this one.

 On Tue, Sep 30, 2014 at 3:54 PM,  buil...@flonkings.com wrote:
 Build: builds.flonkings.com/job/Lucene-5x-Linux-Java7-64-test-only/32308/

 1 tests failed.
 REGRESSION:  org.apache.lucene.index.TestReaderClosed.test

 Error Message:
 Task java.util.concurrent.ExecutorCompletionService$QueueingFuture@27a64b69 
 rejected from java.util.concurrent.ThreadPoolExecutor@16a0eed5[Terminated, 
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]

 Stack Trace:
 java.util.concurrent.RejectedExecutionException: Task 
 java.util.concurrent.ExecutorCompletionService$QueueingFuture@27a64b69 
 rejected from java.util.concurrent.ThreadPoolExecutor@16a0eed5[Terminated, 
 pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 1]
 at 
 __randomizedtesting.SeedInfo.seed([39D86455539BDAA0:B18C5B8FFD67B758]:0)
 at 
 java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2048)
 at 
 java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
 at 
 java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
 at 
 java.util.concurrent.ExecutorCompletionService.submit(ExecutorCompletionService.java:181)
 at 
 org.apache.lucene.search.IndexSearcher$ExecutionHelper.submit(IndexSearcher.java:823)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:447)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:273)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:261)
 at 
 org.apache.lucene.index.TestReaderClosed.test(TestReaderClosed.java:67)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at 
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at 
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at 
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at 
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
 at 
 com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
 at 
 com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
 at 
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at 
 org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
 at 
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at 
 com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
 at 
 

[jira] [Updated] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2014-09-30 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5879:
---
Attachment: LUCENE-5879.patch

New patch, just resolving conflicts from recent trunk changes ...

I have an idea for improving CheckIndex to ferret out these auto-prefix terms; 
I'll explore it.

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch


 This cool idea to generalize numeric/trie fields came from Adrien:
 Today, when we index a numeric field (LongField, etc.) we pre-compute
 (via NumericTokenStream) outside of indexer/codec which prefix terms
 should be indexed.
 But this can be inefficient: you set a static precisionStep, and
 always add those prefix terms regardless of how the terms in the field
 are actually distributed.  Yet typically in real world applications
 the terms have a non-random distribution.
 So, it should be better if instead the terms dict decides where it
 makes sense to insert prefix terms, based on how dense the terms are
 in each region of term space.
 This way we can speed up query time for both term (e.g. infix
 suggester) and numeric ranges, and it should let us use less index
 space and get faster range queries.
  
 This would also mean that min/maxTerm for a numeric field would now be
 correct, vs today where the externally computed prefix terms are
 placed after the full precision terms, causing hairy code like
 NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
 feasible.
 The terms dict can also do tricks not possible if you must live on top
 of its APIs, e.g. to handle the adversary/over-constrained case when a
 given prefix has too many terms following it but finer prefixes
 have too few (what block tree calls "floor" term blocks).
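
A toy sketch of the density idea (illustrative only; the real logic lives in
the block tree terms dict writer, not in a helper like this):

    from collections import Counter

    def auto_prefixes(terms, prefix_len=2, min_terms=3):
        # insert a prefix term only where enough indexed terms share it
        counts = Counter(t[:prefix_len] for t in terms)
        return sorted(p for p, n in counts.items() if n >= min_terms)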



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40-ea-b04) - Build # 4243 - Failure!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4243/
Java: 32bit/jdk1.8.0_40-ea-b04 -client -XX:+UseG1GC

3 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([9C7BCC65E760F4D1:1D9D427D903F94ED]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:568)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (LUCENE-5978) don't write a norm of infinity when analyzer returns no tokens

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153247#comment-14153247
 ] 

ASF subversion and git services commented on LUCENE-5978:
-

Commit 1628463 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1628463 ]

LUCENE-5978: don't write a norm of infinity when analyzer returns no tokens

 don't write a norm of infinity when analyzer returns no tokens
 --

 Key: LUCENE-5978
 URL: https://issues.apache.org/jira/browse/LUCENE-5978
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5978.patch


 When a document doesn't have the field, we fill with zero. when a segment 
 doesn't have the field, we also fill with zero.
 however, when the analyzer doesn't return any terms for the field, we still 
 call similarity.computeNorm(0)... with the default similarity this encodes 
 infinity... -1
 in such a case, it doesn't really matter what the norm is, since it has no 
 terms. But it's more efficient for e.g. compression if we consistently use 
 zero.
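 A minimal sketch of the consistency being proposed (illustrative names only;
 the actual change is in LUCENE-5978.patch):
 {code:java}
 // Sketch, not the patch: treat "field present but analyzer produced zero
 // tokens" the same as "field absent", i.e. write a zero norm instead of
 // calling computeNorm(0) and encoding infinity.
 class NormSketch {
   static long norm(int tokenCount, long computedNorm) {
     // computedNorm stands in for similarity.computeNorm(fieldState)
     return tokenCount == 0 ? 0 : computedNorm;
   }
 }
 {code}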



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5978) don't write a norm of infinity when analyzer returns no tokens

2014-09-30 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153267#comment-14153267
 ] 

Uwe Schindler commented on LUCENE-5978:
---

+1

 don't write a norm of infinity when analyzer returns no tokens
 --

 Key: LUCENE-5978
 URL: https://issues.apache.org/jira/browse/LUCENE-5978
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5978.patch


 When a document doesn't have the field, we fill with zero. when a segment 
 doesn't have the field, we also fill with zero.
 however, when the analyzer doesn't return any terms for the field, we still 
 call similarity.computeNorm(0)... with the default similarity this encodes 
 infinity... -1
 in such a case, it doesn't really matter what the norm is, since it has no 
 terms. But it's more efficient for e.g. compression if we consistently use 
 zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5978) don't write a norm of infinity when analyzer returns no tokens

2014-09-30 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5978.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 don't write a norm of infinity when analyzer returns no tokens
 --

 Key: LUCENE-5978
 URL: https://issues.apache.org/jira/browse/LUCENE-5978
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5978.patch


 When a document doesn't have the field, we fill with zero. when a segment 
 doesn't have the field, we also fill with zero.
 however, when the analyzer doesn't return any terms for the field, we still 
 call similarity.computeNorm(0)... with the default similarity this encodes 
 infinity... -1
 in such a case, it doesn't really matter what the norm is, since it has no 
 terms. But it's more efficient for e.g. compression if we consistently use 
 zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5978) don't write a norm of infinity when analyzer returns no tokens

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5978?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153274#comment-14153274
 ] 

ASF subversion and git services commented on LUCENE-5978:
-

Commit 1628468 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1628468 ]

LUCENE-5978: don't write a norm of infinity when analyzer returns no tokens

 don't write a norm of infinity when analyzer returns no tokens
 --

 Key: LUCENE-5978
 URL: https://issues.apache.org/jira/browse/LUCENE-5978
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5978.patch


 When a document doesn't have the field, we fill with zero. when a segment 
 doesn't have the field, we also fill with zero.
 however, when the analyzer doesn't return any terms for the field, we still 
 call similarity.computeNorm(0)... with the default similarity this encodes 
 infinity... -1
 in such a case, it doesn't really matter what the norm is, since it has no 
 terms. But it's more efficient for e.g. compression if we consistently use 
 zero.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 635 - Still Failing

2014-09-30 Thread Shai Erera
I've tried to change the test to better assert that the facet counts are
sampled OK. E.g. the test currently computes a STDDEV, but that's wrong
since we have only 5 categories that are sampled, and so it's not a real
normal distribution, and every difference from the mean results in a very
high STDDEV... So I tried Pearson's Chi-Squared test (
http://en.wikipedia.org/wiki/Pearson%27s_chi-squared_test), which works
better on more runs, but still fails from time to time.

At that point I started thinking about what exactly we are testing -- we have a
Random sampler which samples DOCUMENTS, not CATEGORIES. So e.g. if I index
documents w/ categories in that order: A B C D A B C D A B C D and so
forth, and I sample every 4th DOCUMENT, I could very well count only
category D, while A, B and C might see none to very low counts, resulting
in a big STDDEV or Chi-Square result.

The sampler has no knowledge of the indexed categories, only the indexed
documents. And you could easily write an adversary indexer which indexes
categories such that the random sampler almost never samples some
categories.

Instead I think we should assert that the sampler sampled roughly e.g. 10%
of the docs, irrespective of the counts of the categories. What do you
think?
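
A rough sketch of that kind of assertion (illustrative names, assuming a 10%
sampling rate and a generous tolerance; no such accessor necessarily exists
today):
{code:java}
import static org.junit.Assert.assertTrue;

// Sketch only: assert on how many documents were sampled, not on the
// per-category counts.
class SamplingAssertSketch {
  static void assertSampledRatio(int sampledDocs, int totalDocs, double rate) {
    double expected = rate * totalDocs;
    // allow +/- 20% slack so an unlucky seed doesn't fail the build
    assertTrue("sampled " + sampledDocs + ", expected ~" + expected,
        Math.abs(sampledDocs - expected) <= 0.2 * expected);
  }
}
{code}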

On Tue, Sep 23, 2014 at 9:47 PM, Shai Erera ser...@gmail.com wrote:

 This is a test bug, but I'm not yet sure how to fix it. The test verifies
 the sampling works OK by computing some statistics about the counted
 facets. In particular it computes the standard deviation and ensures that
 it's smaller than some arbitrary value (200). However, with this seed and
 test parameters, the standard deviation is 215, and I've verified that with
 any seed, if you fix the number of indexed documents to a high enough
 number (50,000), it will likely be bigger than 200.

 What I'm not sure about is how to fix the test -- increasing the number
 from 200 to 300 will only push the limit further until another failure,
 because of other test parameters. I can do that, and investigate again if
 another run fails.

 But increasing that number too high misses the point I think, since if our
 random sampling isn't really random, we'll fail to detect that.

 Basically, with some very bad luck, we could sample such that we hit the
 maximum value of the variance, and therefore no matter the value we'll
 compare the standard deviation to, we might run into this extremely
 bad-luck-case at some point.

 One choice is to increase the value now, and accept that some runs may
 fail, once in a long while ...

 Do we have other tests that do random sampling of stuff and assert the
 sampled values?

 Shai

 On Mon, Sep 22, 2014 at 10:19 PM, Apache Jenkins Server 
 jenk...@builds.apache.org wrote:

 Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/635/

 1 tests failed.
 REGRESSION:
 org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling

 Error Message:


 Stack Trace:
 java.lang.AssertionError
 at
 __randomizedtesting.SeedInfo.seed([EB7A704156A4175F:162195CE7F3E0E8]:0)
 at org.junit.Assert.fail(Assert.java:92)
 at org.junit.Assert.assertTrue(Assert.java:43)
 at org.junit.Assert.assertTrue(Assert.java:54)
 at
 org.apache.lucene.facet.TestRandomSamplingFacetsCollector.testRandomSampling(TestRandomSamplingFacetsCollector.java:136)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
 at
 com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
 at
 org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
 at
 org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
 at
 com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
 at
 org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
 at
 org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
 at
 org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
 at
 com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
 at
 com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
 

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 643 - Still Failing

2014-09-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/643/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.testDistribSearch

Error Message:
no exception matching expected: 400: Request took too long during query 
expansion. Terminating request.

Stack Trace:
java.lang.AssertionError: no exception matching expected: 400: Request took too 
long during query expansion. Terminating request.
at 
__randomizedtesting.SeedInfo.seed([6E87D73A61D675B3:EF6159221689158F]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertFail(CloudExitableDirectoryReaderTest.java:101)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:81)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTest(CloudExitableDirectoryReaderTest.java:54)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)

[jira] [Updated] (SOLR-6476) Create a bulk mode for schema API

2014-09-30 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-6476:
-
Attachment: SOLR-6476.patch

 Create a bulk mode for schema API
 -

 Key: SOLR-6476
 URL: https://issues.apache.org/jira/browse/SOLR-6476
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: managedResource
 Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch


 The current schema API does one operation at a time, while the normal use case 
 is that users add multiple fields/fieldtypes/copyFields etc. in one shot.
 Example:
 {code:javascript}
 curl http://localhost:8983/solr/collection1/schema -H 
 'Content-type:application/json' -d '{
   "add-field": {
     "name": "sell-by",
     "type": "tdate",
     "stored": true
   },
   "add-field": {
     "name": "catchall",
     "type": "text_general",
     "stored": false
   }
 }'
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API

2014-09-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153356#comment-14153356
 ] 

Noble Paul commented on SOLR-6476:
--

SOLR-6249 features applied.

Added javadocs and complex field types in tests.

bq. Order is not important in schema.xml, and in plenty of other contexts. This 
order dependence will need to be explicitly documented.

This will have to be documented in the API documentation in the reference guide.

 Create a bulk mode for schema API
 -

 Key: SOLR-6476
 URL: https://issues.apache.org/jira/browse/SOLR-6476
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: managedResource
 Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch


 The current schema API does one operation at a time, while the normal use case 
 is that users add multiple fields/fieldtypes/copyFields etc. in one shot.
 Example:
 {code:javascript}
 curl http://localhost:8983/solr/collection1/schema -H 
 'Content-type:application/json' -d '{
   "add-field": {
     "name": "sell-by",
     "type": "tdate",
     "stored": true
   },
   "add-field": {
     "name": "catchall",
     "type": "text_general",
     "stored": false
   }
 }'
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: svn commit: r1627410 - /lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py

2014-09-30 Thread Chris Hostetter

I'm confused: why are we telling this link checker that it is ok to have 
whitespace in the middle of a URL?


: Date: Wed, 24 Sep 2014 20:18:31 -
: From: rjer...@apache.org
: Reply-To: dev@lucene.apache.org
: To: comm...@lucene.apache.org
: Subject: svn commit: r1627410 -
: /lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py
: 
: Author: rjernst
: Date: Wed Sep 24 20:18:31 2014
: New Revision: 1627410
: 
: URL: http://svn.apache.org/r1627410
: Log:
: Account for whitespace in the middle of links when checking javadoc links
: 
: Modified:
: lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py
: 
: Modified: lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py
: URL: 
http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py?rev=1627410&r1=1627409&r2=1627410&view=diff
: ==
: --- lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py (original)
: +++ lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py Wed Sep 24 
20:18:31 2014
: @@ -177,6 +177,9 @@ def checkAll(dirName):
:else:
:  anchor = None
:  
: +  # remove any whitespace from the middle of the link
: +  link = ''.join(link.split())
: +
:idx = link.find('?')
:if idx != -1:
:  link = link[:idx]
: 
: 
: 

-Hoss
http://www.lucidworks.com/

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6460) Keep transaction logs around longer

2014-09-30 Thread Renaud Delbru (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renaud Delbru updated SOLR-6460:

Attachment: SOLR-6460.patch

A new patch that introduces efficient seeking over a list of transaction log 
files. Efficient seeking is achieved by adding metadata (the version number) to 
the tlog filename and by leveraging the filesystem's index. The tlog filename 
now has the following syntax: tlog.${logId}.${startVersion}.
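
An illustrative sketch (not the patch) of how the new filename makes seeking 
cheap: the start version can be recovered by parsing the name alone, so the 
right log file can be picked without opening any of them.
{code:java}
import java.util.List;

// Hypothetical helper: parse tlog.${logId}.${startVersion} filenames and
// select the newest log whose startVersion is at or below the target.
class TlogSeekSketch {
  static long startVersion(String fileName) {
    // e.g. "tlog.0000000012.1481234567890" -> 1481234567890
    return Long.parseLong(fileName.substring(fileName.lastIndexOf('.') + 1));
  }

  /** files must be sorted by startVersion, ascending */
  static String logContaining(List<String> files, long version) {
    String candidate = null;
    for (String f : files) {
      if (startVersion(f) <= version) candidate = f; // still at or before target
      else break;
    }
    return candidate;
  }
}
{code}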




 Keep transaction logs around longer
 ---

 Key: SOLR-6460
 URL: https://issues.apache.org/jira/browse/SOLR-6460
 Project: Solr
  Issue Type: Sub-task
Reporter: Yonik Seeley
 Attachments: SOLR-6460.patch, SOLR-6460.patch


 Transaction logs are currently deleted relatively quickly... but we need to 
 keep them around much longer to be used as a source for cross-datacenter 
 recovery.  This will also be useful in the future for enabling peer-sync to 
 use more historical updates before falling back to replication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6460) Keep transaction logs around longer

2014-09-30 Thread Renaud Delbru (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153430#comment-14153430
 ] 

Renaud Delbru edited comment on SOLR-6460 at 9/30/14 5:26 PM:
--

A new patch that introduces efficient seeking over a list of transaction log 
files. Efficient seeking is achieved by adding metadata (the version number) to 
the tlog filename and by leveraging the filesystem's index. The tlog filename 
now has the following syntax:
{noformat}
tlog.${logId}.${startVersion}
{noformat}




was (Author: rendel):
A new patch that introduces efficient seeking over a list of transaction log 
files. Efficient seeking is achieved by adding metadata (version number) to 
tlog filename and by leveraging the filesystem's index. The tlog filename has 
now the following syntax: tlog.${logId}.${startVersion}.




 Keep transaction logs around longer
 ---

 Key: SOLR-6460
 URL: https://issues.apache.org/jira/browse/SOLR-6460
 Project: Solr
  Issue Type: Sub-task
Reporter: Yonik Seeley
 Attachments: SOLR-6460.patch, SOLR-6460.patch


 Transaction logs are currently deleted relatively quickly... but we need to 
 keep them around much longer to be used as a source for cross-datacenter 
 recovery.  This will also be useful in the future for enabling peer-sync to 
 use more historical updates before falling back to replication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6476) Create a bulk mode for schema API

2014-09-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153438#comment-14153438
 ] 

Steve Rowe commented on SOLR-6476:
--

Thanks for the changes, Noble, those look fine - two new issues:

# Didn't [~markrmil...@gmail.com] convert all timing stuff in Solr to use 
{{nanoTime()}} instead of {{System.getCurrentTimeMillis()}}?  If so, shouldn't 
we use {{nanoTime()}} here too?  (This applies to [~thelabdude]'s SOLR-6249 
work as well.)
# In {{SchemaManager.waitForOtherReplicasToUpdate()}}, called from 
{{doOperations()}}, you send {{-1}} in as {{maxWaitSecs}} to 
{{ManagedIndexSchema.waitForSchemaZkVersionAgreement()}} when the timeout has 
been exceeded, but AFAICT negative values aren't handled appropriately there, 
e.g. it gets sent in unexamined to {{ExecutorService.invokeAll()}}:

{code:java}
  private List<String> doOperations(List<Operation> operations) {
    int timeout = req.getParams().getInt(BaseSolrResource.UPDATE_TIMEOUT_SECS, -1);
    long startTime = System.currentTimeMillis();
    [...]
    managedIndexSchema.persistManagedSchema(false);
    core.setLatestSchema(managedIndexSchema);
    waitForOtherReplicasToUpdate(timeout, startTime);
    [...]
  }

  private void waitForOtherReplicasToUpdate(int timeout, long startTime) {
    if (timeout > 0 && [...]) {
      [...]
      ManagedIndexSchema.waitForSchemaZkVersionAgreement([...],
          getTimeLeftInSecs(timeout, startTime));
      }
    }
  }

  private int getTimeLeftInSecs(int timeout, long startTime) {
    long timeLeftSecs = timeout - ((System.currentTimeMillis() - startTime) / 1000);
    return (int) (timeLeftSecs > 0 ? timeLeftSecs : -1);
  }
{code}
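
For reference, a sketch of the nanoTime-based variant with the remaining wait 
floored at zero, so a negative value is never handed to 
{{ExecutorService.invokeAll()}} (illustrative only, not a patch):
{code:java}
import java.util.concurrent.TimeUnit;

// Sketch: monotonic elapsed time via nanoTime(), clamped at zero.
class TimeoutSketch {
  static int timeLeftInSecs(int timeoutSecs, long startNanos) {
    long elapsedSecs = TimeUnit.NANOSECONDS.toSeconds(System.nanoTime() - startNanos);
    return (int) Math.max(0, timeoutSecs - elapsedSecs);
  }
}
{code}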


 Create a bulk mode for schema API
 -

 Key: SOLR-6476
 URL: https://issues.apache.org/jira/browse/SOLR-6476
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: managedResource
 Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch


 The current schema API does one operation at a time, while the normal use case 
 is that users add multiple fields/fieldtypes/copyFields etc. in one shot.
 Example:
 {code:javascript}
 curl http://localhost:8983/solr/collection1/schema -H 
 'Content-type:application/json' -d '{
   "add-field": {
     "name": "sell-by",
     "type": "tdate",
     "stored": true
   },
   "add-field": {
     "name": "catchall",
     "type": "text_general",
     "stored": false
   }
 }'
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-6476) Create a bulk mode for schema API

2014-09-30 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153438#comment-14153438
 ] 

Steve Rowe edited comment on SOLR-6476 at 9/30/14 5:50 PM:
---

Thanks for the changes, Noble, those look fine - two new issues:

(*edit*: {{System.getCurrentTimeMillis()}} -> {{currentTimeMillis()}})

# Didn't [~markrmil...@gmail.com] convert all timing stuff in Solr to use 
{{nanoTime()}} instead of {{currentTimeMillis()}}?  If so, shouldn't we use 
{{nanoTime()}} here too?  (This applies to [~thelabdude]'s SOLR-6249 work as 
well.)
# In {{SchemaManager.waitForOtherReplicasToUpdate()}}, called from 
{{doOperations()}}, you send {{-1}} in as {{maxWaitSecs}} to 
{{ManagedIndexSchema.waitForSchemaZkVersionAgreement()}} when the timeout has 
been exceeded, but AFAICT negative values aren't handled appropriately there, 
e.g. it gets sent in unexamined to {{ExecutorService.invokeAll()}}:

{code:java}
  private List<String> doOperations(List<Operation> operations) {
    int timeout = req.getParams().getInt(BaseSolrResource.UPDATE_TIMEOUT_SECS, -1);
    long startTime = System.currentTimeMillis();
    [...]
    managedIndexSchema.persistManagedSchema(false);
    core.setLatestSchema(managedIndexSchema);
    waitForOtherReplicasToUpdate(timeout, startTime);
    [...]
  }

  private void waitForOtherReplicasToUpdate(int timeout, long startTime) {
    if (timeout > 0 && [...]) {
      [...]
      ManagedIndexSchema.waitForSchemaZkVersionAgreement([...],
          getTimeLeftInSecs(timeout, startTime));
      }
    }
  }

  private int getTimeLeftInSecs(int timeout, long startTime) {
    long timeLeftSecs = timeout - ((System.currentTimeMillis() - startTime) / 1000);
    return (int) (timeLeftSecs > 0 ? timeLeftSecs : -1);
  }
{code}



was (Author: steve_rowe):
Thanks for the changes, Noble, those look fine - two new issues:

# Didn't [~markrmil...@gmail.com] convert all timing stuff in Solr to use 
{{nanoTime()}} instead of {{System.getCurrentTimeMillis()}}?  If so, shouldn't 
we use {{nanoTime()}} here too?  (This applies to [~thelabdude]'s SOLR-6249 
work as well.)
# In {{SchemaManager.waitForOtherReplicasToUpdate()}}, called from 
{{doOperations()}}, you send {{-1}} in as {{maxWaitSecs}} to 
{{ManagedIndexSchema.waitForSchemaZkVersionAgreement()}} when the timeout has 
been exceeded, but AFAICT negative values aren't handled appropriately there, 
e.g. it gets sent in unexamined to {{ExecutorService.invokeAll()}}:

{code:java}
  private List<String> doOperations(List<Operation> operations) {
    int timeout = req.getParams().getInt(BaseSolrResource.UPDATE_TIMEOUT_SECS, -1);
    long startTime = System.currentTimeMillis();
    [...]
    managedIndexSchema.persistManagedSchema(false);
    core.setLatestSchema(managedIndexSchema);
    waitForOtherReplicasToUpdate(timeout, startTime);
    [...]
  }

  private void waitForOtherReplicasToUpdate(int timeout, long startTime) {
    if (timeout > 0 && [...]) {
      [...]
      ManagedIndexSchema.waitForSchemaZkVersionAgreement([...],
          getTimeLeftInSecs(timeout, startTime));
      }
    }
  }

  private int getTimeLeftInSecs(int timeout, long startTime) {
    long timeLeftSecs = timeout - ((System.currentTimeMillis() - startTime) / 1000);
    return (int) (timeLeftSecs > 0 ? timeLeftSecs : -1);
  }
{code}


 Create a bulk mode for schema API
 -

 Key: SOLR-6476
 URL: https://issues.apache.org/jira/browse/SOLR-6476
 Project: Solr
  Issue Type: Bug
  Components: Schema and Analysis
Reporter: Noble Paul
Assignee: Noble Paul
  Labels: managedResource
 Attachments: SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, SOLR-6476.patch, 
 SOLR-6476.patch


 The current schema API does one operation at a time, while the normal use case 
 is that users add multiple fields/fieldtypes/copyFields etc. in one shot.
 Example:
 {code:javascript}
 curl http://localhost:8983/solr/collection1/schema -H 
 'Content-type:application/json' -d '{
   "add-field": {
     "name": "sell-by",
     "type": "tdate",
     "stored": true
   },
   "add-field": {
     "name": "catchall",
     "type": "text_general",
     "stored": false
   }
 }'
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6480) Too Many Open files trying to ask a replica to recover

2014-09-30 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6480:


Assignee: Timothy Potter

 Too Many Open files trying to ask a replica to recover
 --

 Key: SOLR-6480
 URL: https://issues.apache.org/jira/browse/SOLR-6480
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7.2
Reporter: Ugo Matrangolo
Assignee: Timothy Potter
 Attachments: SOLR-6480.patch


 After the DistributedUpdateProcessor tries multiple times to ask a replica to 
 recover, it eventually starts to fail with the following error:
 {code}
 2014-08-28 22:42:46,285 [updateExecutor-1-thread-2334] ERROR 
 org.apache.solr.update.processor.DistributedUpdateProcessor  - 
 http://10.140.4.246:9765: Could not tell a replica to 
 recover:org.apache.solr.client.solrj.SolrServerException: IOException occured 
 when talking to server at: http://10.140.4.246:9765
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:507)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:199)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor$1.run(DistributedUpdateProcessor.java:685)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.net.SocketException: Too many open files
 at java.net.Socket.createImpl(Socket.java:397)
 at java.net.Socket.getImpl(Socket.java:460)
 at java.net.Socket.setSoTimeout(Socket.java:1017)
 at 
 org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:116)
 at 
 org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:178)
 at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
 at 
 org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:395)
 ... 5 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6480) Too Many Open files trying to ask a replica to recover

2014-09-30 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153479#comment-14153479
 ] 

Timothy Potter commented on SOLR-6480:
--

Thanks for the patch Ugo! I've been working in the DistributedUpdateProcessor 
lately and will review and help to get this committed.

 Too Many Open files trying to ask a replica to recover
 --

 Key: SOLR-6480
 URL: https://issues.apache.org/jira/browse/SOLR-6480
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7.2
Reporter: Ugo Matrangolo
Assignee: Timothy Potter
 Attachments: SOLR-6480.patch


 After the DistributedUpdateProcessor tries multiple times to ask a replica to 
 recover, it eventually starts to fail with the following error:
 {code}
 2014-08-28 22:42:46,285 [updateExecutor-1-thread-2334] ERROR 
 org.apache.solr.update.processor.DistributedUpdateProcessor  - 
 http://10.140.4.246:9765: Could not tell a replica to 
 recover:org.apache.solr.client.solrj.SolrServerException: IOException occured 
 when talking to server at: http://10.140.4.246:9765
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:507)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:199)
 at 
 org.apache.solr.update.processor.DistributedUpdateProcessor$1.run(DistributedUpdateProcessor.java:685)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 Caused by: java.net.SocketException: Too many open files
 at java.net.Socket.createImpl(Socket.java:397)
 at java.net.Socket.getImpl(Socket.java:460)
 at java.net.Socket.setSoTimeout(Socket.java:1017)
 at 
 org.apache.http.conn.scheme.PlainSocketFactory.connectSocket(PlainSocketFactory.java:116)
 at 
 org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:178)
 at 
 org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:610)
 at 
 org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:445)
 at 
 org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
 at 
 org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
 at 
 org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:395)
 ... 5 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.7.0_67) - Build # 11357 - Failure!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11357/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.testDistribSearch

Error Message:
no exception matching expected: 400: Request took too long during query 
expansion. Terminating request.

Stack Trace:
java.lang.AssertionError: no exception matching expected: 400: Request took too 
long during query expansion. Terminating request.
at 
__randomizedtesting.SeedInfo.seed([3B4750BB9A329A2:8252FB13CEFC499E]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertFail(CloudExitableDirectoryReaderTest.java:101)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:75)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTest(CloudExitableDirectoryReaderTest.java:54)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
   

Re: svn commit: r1627410 - /lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py

2014-09-30 Thread Ryan Ernst
Because browsers don't care? The link which caused the jenkins failure with
this worked just fine when I tested it manually.

On Tue, Sep 30, 2014 at 10:18 AM, Chris Hostetter hossman_luc...@fucit.org
wrote:


 I'm confused: why are we telling this link checker that it is ok to have
 whitespace in the middle of a URL?


 : Date: Wed, 24 Sep 2014 20:18:31 -
 : From: rjer...@apache.org
 : Reply-To: dev@lucene.apache.org
 : To: comm...@lucene.apache.org
 : Subject: svn commit: r1627410 -
 : /lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py
 :
 : Author: rjernst
 : Date: Wed Sep 24 20:18:31 2014
 : New Revision: 1627410
 :
 : URL: http://svn.apache.org/r1627410
 : Log:
 : Account for whitespace in the middle of links when checking javadoc links
 :
 : Modified:
 : lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py
 :
 : Modified: lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py
 : URL:
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py?rev=1627410&r1=1627409&r2=1627410&view=diff
 :
 ==
 : --- lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py (original)
 : +++ lucene/dev/trunk/dev-tools/scripts/checkJavadocLinks.py Wed Sep 24
 20:18:31 2014
 : @@ -177,6 +177,9 @@ def checkAll(dirName):
 :else:
 :  anchor = None
 :
 : +  # remove any whitespace from the middle of the link
 : +  link = ''.join(link.split())
 : +
 :idx = link.find('?')
 :if idx != -1:
 :  link = link[:idx]
 :
 :
 :

 -Hoss
 http://www.lucidworks.com/

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Updated] (SOLR-6513) Add a collectionsAPI call BALANCESLICEUNIQUE

2014-09-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6513:
-
Description: 
Another sub-task for SOLR-6491. The ability to assign a property on a 
node-by-node basis is nice, but tedious to get right for a sysadmin, especially 
if there are, say, 100s of nodes hosting a system. This JIRA would essentially 
provide an automatic mechanism for assigning a property. This particular 
command simply changes the cluster state; it doesn't do anything like re-assign 
functions.

My idea for this version is fairly limited. You'd have to specify a collection 
and there would be no attempt to, say, evenly distribute the preferred leader 
role/property for this collection by looking at _other_ collections. Or by 
looking at underlying hardware capabilities. Or...

It would be a pretty simple round-robin assignment. About the only intelligence 
built in would be to change as few roles/properties as possible. Let's say that 
the correct number of nodes for this role turned out to be 3. Any node 
currently having 3 properties for this collection would NOT be changed. Any 
node having 2 properties would have one added, taken from some node with > 3 
properties.

This probably needs an optional parameter, something like 
includeInactiveNodes=true|false

Since this is an arbitrary property, one must specify sliceUnique=true. So for 
the preferredLeader functionality, one would specify something like:
action=BALANCESLICEUNIQUE&property=preferredLeader&property.value=true.

There are checks in this code that require the preferredLeader to have a t/f 
value and require that sliceUnique be true. That said, this can be called on 
an arbitrary property that is meant to appear only once per slice.
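
A rough sketch of the round-robin idea (illustrative only, not the attached 
patch): move one property at a time from over-assigned nodes to under-assigned 
ones, so nodes already at the target count are never touched.
{code:java}
import java.util.Map;

// Hypothetical balancer over per-node property counts.
class BalanceSketch {
  static void balance(Map<String, Integer> countsPerNode, int target) {
    for (Map.Entry<String, Integer> under : countsPerNode.entrySet()) {
      while (under.getValue() < target) {
        Map.Entry<String, Integer> donor = null;
        for (Map.Entry<String, Integer> e : countsPerNode.entrySet()) {
          if (e.getValue() > target) { donor = e; break; } // over-assigned node
        }
        if (donor == null) return;              // nothing left to move
        donor.setValue(donor.getValue() - 1);   // take one property...
        under.setValue(under.getValue() + 1);   // ...and give it here
      }
    }
  }
}
{code}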

  was:
Another sub-task for SOLR-6491. The ability to assign a preferred leader on a 
node-by-node basis is nice, but tedious to get right for a sysadmin, especially 
if there are, say, 100s of nodes hosting a system. This JIRA would essentially 
provide an automatic mechanism for assigning these roles (or properties). This 
particular command would NOT re-elect leaders, just change the flag in the 
clusterstate.

My idea for this version is fairly limited. You'd have to specify a collection 
and there would be no attempt to, say, evenly distribute the preferred leader 
role/property for this collection by looking at _other_ collections. Or by 
looking at underlying hardware capabilities. Or...

It would be a pretty simple round-robin assignment. About the only intelligence 
built in would be to change as few roles/properties as possible. Let's say that 
the correct number of nodes for this role turned out to be 3. Any node 
currently having 3 preferred leaders for this collection would NOT be changed. 
Any node having 2 preferred leaders would have one added that would be taken 
from some node with > 3 preferred leaders.

This probably needs an optional parameter, something like 
includeInactiveNodes=true|false

Summary: Add a collectionsAPI call BALANCESLICEUNIQUE  (was: Add a 
collectionsAPI call ASSIGNPREFERREDLEADERS)

 Add a collectionsAPI call BALANCESLICEUNIQUE
 

 Key: SOLR-6513
 URL: https://issues.apache.org/jira/browse/SOLR-6513
 Project: Solr
  Issue Type: Improvement
Reporter: Erick Erickson
Assignee: Erick Erickson
 Attachments: SOLR-6513.patch


 Another sub-task for SOLR-6491. The ability to assign a property on a 
 node-by-node basis is nice, but tedious to get right for a sysadmin, 
 especially if there are, say, 100s of nodes hosting a system. This JIRA would 
 essentially provide an automatic mechanism for assigning a property. This 
 particular command simply changes the cluster state; it doesn't do anything 
 like re-assign functions.
 My idea for this version is fairly limited. You'd have to specify a 
 collection and there would be no attempt to, say, evenly distribute the 
 preferred leader role/property for this collection by looking at _other_ 
 collections. Or by looking at underlying hardware capabilities. Or...
 It would be a pretty simple round-robin assignment. About the only 
 intelligence built in would be to change as few roles/properties as possible. 
 Let's say that the correct number of nodes for this role turned out to be 3. 
 Any node currently having 3 properties for this collection would NOT be 
 changed. Any node having 2 properties would have one added that would be 
 taken from some node with > 3 properties.
 This probably needs an optional parameter, something like 
 includeInactiveNodes=true|false
 Since this is an arbitrary property, one must specify sliceUnique=true. So 
 for the preferredLeader functionality, one would specify something like:
 

[jira] [Created] (SOLR-6574) new ValueSources & parser syntax for coercing the datatypes used in wrapped ValueSources

2014-09-30 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6574:
--

 Summary: new ValueSources & parser syntax for coercing the 
datatypes used in wrapped ValueSources
 Key: SOLR-6574
 URL: https://issues.apache.org/jira/browse/SOLR-6574
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man



Something I've been thinking about for a while, but SOLR-6562 recently goaded 
me into opening a jira for...

we could/should add ValueSourceParsers for coercing the datatypes of the 
ValueSources that they wrap, as a way of controlling whether we ultimately call 
FunctionValues.floatValue(docid) vs intValue(docid) vs longValue(docid) etc...

so while sum(field1, field2) currently does float based math on the two 
fields, we could use int(sum(field1, field2)) which would create some new 
CoerceIntValueSource that would wrap the existing SumValueSource, and every 
type specific method in CoerceIntValueSource's FunctionValues would delegate to 
SumValueSource's intValue method -- and likewise: CoerceIntValueSource's 
objectVal() method would return an Integer wrapped around the results of 
intValue(docid).


(FWIW: I think a bunch of the existing math based FunctionValues currently 
implement most of their methods like intValue/longValue/doubleValue/etc... by 
just delegating to floatValue -- so for this to work properly that would have 
to be fixed as well, but those fixes can & should be tracked in their own jiras)
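
A straw-man sketch of that delegation, against the 4.x FunctionValues API 
(names illustrative; nothing like this exists yet):
{code:java}
import org.apache.lucene.queries.function.FunctionValues;

// Every type-specific method funnels through the wrapped source's intVal(),
// so int math is what actually happens regardless of the caller's type.
class CoerceIntFunctionValues extends FunctionValues {
  private final FunctionValues wrapped; // e.g. SumValueSource's values

  CoerceIntFunctionValues(FunctionValues wrapped) { this.wrapped = wrapped; }

  @Override public int intVal(int doc)       { return wrapped.intVal(doc); }
  @Override public float floatVal(int doc)   { return wrapped.intVal(doc); }
  @Override public long longVal(int doc)     { return wrapped.intVal(doc); }
  @Override public double doubleVal(int doc) { return wrapped.intVal(doc); }
  @Override public Object objectVal(int doc) { return wrapped.intVal(doc); } // boxes to Integer
  @Override public String toString(int doc)  { return "int(" + wrapped.toString(doc) + ")"; }
}
{code}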




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6575) ValueSources/FunctionValues should be able to (dynamically) indicate their preferred data type (propagating up)

2014-09-30 Thread Hoss Man (JIRA)
Hoss Man created SOLR-6575:
--

 Summary: ValueSources/FunctionValues should be able to 
(dynamically) indicate their preferred data type (propagating up)
 Key: SOLR-6575
 URL: https://issues.apache.org/jira/browse/SOLR-6575
 Project: Solr
  Issue Type: Improvement
Reporter: Hoss Man



Something I've been thinking about for a while, but SOLR-6562 recently goaded 
me into opening a jira for...

The ValueSource/FunctionValues API is designed to work with different levels of 
math precision (int, long, float, double, date, etc...) and the 
FunctionValues.objectVal() method provides a generic way to fetch an arbitrary 
type from any FunctionValues instance -- the value can be retrieved in the 
preferred type for a given ValueSource (ie: an Integer if the ValueSource 
corresponds to the DocValues of an int field).

But for ValueSources that wrap other value sources (ie: implementing math 
functions like sum or product) there is no easy way at runtime to know which 
of the underlying methods on the FunctionValues is the best one to call.  It 
would be helpful if FunctionValues or ValueSource had some type of method on it 
(ie: canonicalDataType()) that could return some enumeration value indicating 
which of the various low level methods (intValue(docid), floatValue(docid), 
etc...) is best suited for the data it represents.

Straw man idea...

For the lowest level ValueTypes coming from DocValues, these methods could 
return a constant -- but for things like SumValueSource, canonicalDataType() 
could be recursive -- returning the least common denominator of the 
ValueSources it wraps. The corresponding intValue() and floatValue() methods in 
that class could then cast appropriately.

So even if you have SumValueSource wrapped around several IntDocValuesSource, 
SumValueSource.canonicalDataType() would return INT and if you called 
SumValueSource's FunctionValues.intValue(docid) it would add up the results of 
the intValues() methods on all of the wrapped FunctionValues -- but 
floatValues(docid) would/could still add up the results of the 
floatValue(docid) results from all of the wrapped FunctionValues (for people 
who want to coerce float based math -- ie: SOLR-6574)
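
And a straw-man sketch of that recursion (hypothetical API, nothing like this 
exists today):
{code:java}
import java.util.List;

// Hypothetical enum; "least common denominator" here means the widest type.
enum DataType {
  INT, LONG, FLOAT, DOUBLE; // ordered narrowest to widest

  static DataType lcd(DataType a, DataType b) {
    return a.ordinal() >= b.ordinal() ? a : b;
  }
}

interface TypedValueSource {
  DataType canonicalDataType();
}

// e.g. what a SumValueSource could do over the sources it wraps:
class SumTypeSketch {
  static DataType canonicalDataType(List<TypedValueSource> wrapped) {
    DataType result = DataType.INT;
    for (TypedValueSource vs : wrapped) {
      result = DataType.lcd(result, vs.canonicalDataType());
    }
    return result; // stays INT when everything wrapped is an int source
  }
}
{code}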




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6562) Function query calculates the wrong value

2014-09-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14153532#comment-14153532
 ] 

Hoss Man commented on SOLR-6562:



This is currently working as designed -- not a bug, but I've opened some 
related improvement issues I've had in the back of my mind that your report 
reminded me of.

The ValueSource implementations (the internals of function queries) support 
math operations on diff data types (int, long, float, double, etc...) 
corresponding to the lowest level FieldType support in lucene DocValues (the 
same API as used by the FieldCache).

However: at present, there is no general purpose way to indicate which datatype 
you'd like to see used when doing math operations (neither from a bottom up 
"the source data is ints, so do int math" or a top down "I ultimately want a 
long, so do long math") standpoint.

Since the primary purpose of function queries is to be used in boosting query 
scores, which are already floating point -- that's what gets used at the moment.

bq. I'm using function queries to calculate dates. For example to add some 
hours to a date.

The ms() function is specifically designed to coerce millisecond based (long) 
math when subtracting date fields like you are attempting in your original 
examples. When combining the results of ms() inside of a sum() function, that 
will still be done using floating point math by default, however.
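
For a feel of why float math can turn an exact-zero expression into something 
like 8388608 (exactly 2^23, a telltale float rounding artifact): a float 
mantissa has only 24 bits, so longs of this magnitude are rounded before any 
arithmetic happens. A self-contained illustration using two values from the 
report:
{code:java}
public class FloatPrecisionDemo {
  public static void main(String[] args) {
    long a = 1416906516710L;
    long b = 141678360L;

    long exact = Math.abs(a - b);                   // exact long math
    float approx = Math.abs((float) a - (float) b); // both operands rounded first

    // nonzero: each value was rounded to the nearest float before subtracting
    System.out.println(exact - (long) approx);
  }
}
{code}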



 Function query calculates the wrong value
 -

 Key: SOLR-6562
 URL: https://issues.apache.org/jira/browse/SOLR-6562
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Stefan Neumüller
Priority: Critical

 This calculation 
 fl=sub(sum(abs(sub(1416906516710,141678360)),abs(sub(1036800,1416906516710))),10226321640)
  should return 0. But the calculated value is 8388608



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6562) Function query calculates the wrong value

2014-09-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6562?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6562.

Resolution: Invalid

resolving as invalid since this is functioning as designed -- see linked issues 
for future improvements.

 Function query calculates the wrong value
 -

 Key: SOLR-6562
 URL: https://issues.apache.org/jira/browse/SOLR-6562
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.9
Reporter: Stefan Neumüller
Priority: Critical

 This calculation 
 fl=sub(sum(abs(sub(1416906516710,141678360)),abs(sub(1036800,1416906516710))),10226321640)
  should return 0. But the calculated value is 8388608



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-6490) ValueSourceParser function max does not handle dates.

2014-09-30 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6490?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-6490.

Resolution: Invalid

resolving as invalid since this is currently working as designed -- but see 
linked issues for discussion of future improvements to give the user more 
control over this sort of thing.

in your specific case, something like this might work better...

sort=max(ms(date1_field_tdt), ms(date2_field_tdt))

and/or if there is a better baseline date (other than the unix epoch) you 
might want to try the 2-arg form of the ms() function.
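
For example (illustrative only):

  sort=max(ms(date1_field_tdt,NOW/YEAR), ms(date2_field_tdt,NOW/YEAR)) desc

Subtracting a nearby baseline keeps the millisecond values small, so the float 
conversion inside max() loses less (ideally no) precision.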

 ValueSourceParser function max does not handle dates.
 ---

 Key: SOLR-6490
 URL: https://issues.apache.org/jira/browse/SOLR-6490
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Reporter: Aaron McMillin
Priority: Minor

 As a user,
 when trying to use sort=max(date1_field_tdt, date2_field_tdt),
 I expect documents to be returned in order.
 Currently this is not the case. Dates are stored as Long, but max uses 
 MaxFloatFunction which casts them to Floats, thereby losing precision.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-4212) Support for facet pivot query for filtered count

2014-09-30 Thread Steve Molloy (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Molloy updated SOLR-4212:
---
Attachment: SOLR-4212-multiple-q.patch

Add test and build map of parsed queries once

 Support for facet pivot query for filtered count
 

 Key: SOLR-4212
 URL: https://issues.apache.org/jira/browse/SOLR-4212
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 4.0
Reporter: Steve Molloy
 Fix For: 4.9, Trunk

 Attachments: SOLR-4212-multiple-q.patch, SOLR-4212-multiple-q.patch, 
 SOLR-4212.patch, SOLR-4212.patch, SOLR-4212.patch, patch-4212.txt


 Facet pivots provide hierarchical support for computing data used to populate 
 a treemap or similar visualization. TreeMaps usually offer users extra 
 information by applying an overlay color on top of the existing square sizes 
 based on hierarchical counts. This second count is based on user choices, 
 representing, usually with a gradient, the proportion of the square that fits 
 the user's choices.
 The proposition is to add a facet.pivot.q parameter that would allow 
 specifying one or more queries (per field) that would be intersected with the 
 DocSet used to calculate the pivot count, stored in a separate qcounts list, 
 each entry keyed by the query.
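
As a sketch, a request using the proposed parameter might look like this 
(syntax per the proposal above, not a released API):

  facet=true
  facet.pivot=category,brand
  facet.pivot.q=inStock:true

Each pivot bucket would then carry, alongside its normal count, a qcounts 
entry keyed by inStock:true holding the size of the intersection.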



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #719: POMs out of sync

2014-09-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/719/

4 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
Error CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create 
core [halfcollection_shard1_replica1] Caused by: Could not get shard id for 
core: halfcollection_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Error 
CREATEing SolrCore 'halfcollection_shard1_replica1': Unable to create core 
[halfcollection_shard1_replica1] Caused by: Could not get shard id for core: 
halfcollection_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([28F080AEF013FF18:A9160EB6874C9F24]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:568)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)


REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:61534, 
http://127.0.0.1:61525, http://127.0.0.1:61531, http://127.0.0.1:61520, 
http://127.0.0.1:61528]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[http://127.0.0.1:61534, http://127.0.0.1:61525, 
http://127.0.0.1:61531, http://127.0.0.1:61520, http://127.0.0.1:61528]
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:568)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.doRequest(LBHttpSolrServer.java:343)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:304)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:880)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:658)
at 
org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:601)
at 
org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:171)
at 
org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:144)
at 
org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:88)


REGRESSION:  org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery

Error Message:
null

Stack Trace:
java.lang.AssertionError: 
at 
__randomizedtesting.SeedInfo.seed([E81A07C7AA63E0DE:5B4D3385CEBE045B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.assertQEx(SolrTestCaseJ4.java:850)
at 
org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery(ExitableDirectoryReaderTest.java:67)


REGRESSION:  
org.apache.solr.core.ExitableDirectoryReaderTest.testQueriesOnDocsWithMultipleTerms

Error Message:
null

Stack Trace:
java.lang.AssertionError: 
at 
__randomizedtesting.SeedInfo.seed([E81A07C7AA63E0DE:86DA341686169BF1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.SolrTestCaseJ4.assertQEx(SolrTestCaseJ4.java:850)
at 
org.apache.solr.core.ExitableDirectoryReaderTest.testQueriesOnDocsWithMultipleTerms(ExitableDirectoryReaderTest.java:88)




Build Log:
[...truncated 53058 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:547: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:199: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 236 minutes 25 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (SOLR-6517) CollectionsAPI call ELECTPREFERREDLEADERS

2014-09-30 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6517:
-
Summary: CollectionsAPI call ELECTPREFERREDLEADERS  (was: CollectionsAPI 
call REELECTLEADERS)

 CollectionsAPI call ELECTPREFERREDLEADERS
 -

 Key: SOLR-6517
 URL: https://issues.apache.org/jira/browse/SOLR-6517
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0, Trunk
Reporter: Erick Erickson
Assignee: Erick Erickson

 Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are 
 assigned, there has to be a command to "make it so, Mr. Solr". This is something 
 of a placeholder to collect ideas. One wouldn't want to flood the system with 
 hundreds of re-assignments at once. Should this be synchronous or async? 
 Should it make the best attempt but not worry about perfection? Should it???
 A collection=name parameter would be required, and it would re-elect all the 
 leaders that were on the 'wrong' node.
 I'm thinking of optionally allowing one to specify a shard in the case where 
 you wanted to make a very specific change. Note that there's no need to 
 specify a particular replica, since there should be only a single 
 preferredLeader per slice.
 This command would do nothing to any slice that did not have a replica with a 
 preferredLeader role. Likewise it would do nothing if the slice in question 
 already had the leader role assigned to the node with the preferredLeader 
 role.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6517) CollectionsAPI call ELECTPREFERREDLEADERS

2014-09-30 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14153718#comment-14153718
 ] 

Erick Erickson commented on SOLR-6517:
--

[~markrmil...@gmail.com] [~noble.paul] [~shalinmangar] I know you guys have 
been in this code a LOT, so I'd appreciate any comments before I get too far 
into mucking around in code that's (a) complex and (b) easy to mess up.

I thought I'd ask if this has any possibility of working. I'll be pursuing this 
no matter what, and if the conclusion is it's a horrible idea I'll chalk it up 
to a learning experience ;).

All I'm looking for here is whether this seems like a reasonable approach; I'm 
digging into the details of how to make it happen.

Assuming SOLR-6512 (assign property to a replica, preferredLeader in this case) 
and SOLR-6513 (distribute a unique property for one replica for each shard in a 
collection evenly across all the nodes hosting any replicas for that 
collection) are committed, I'm left with the leader-re-election problem. Each 
slice will have one (and only one) replica with a property 
preferredLeader:true.

When a node joins the election process (LeaderElector.joinElection), it could 
insert itself at the head of the list if preferredLeader==true. I'm looking at 
the LeaderElector class. There's the very interesting method: 
joinElection(ElectionContext context, boolean replacement, boolean joinAtHead).

I'm particularly interested in the joinAtHead parameter; it seems like it's 
exactly what I need. If this method were to look at the properties for the 
replica and join at the head if the preferredLeader property was set, it seems 
ideal. It _also_ seems like the action of the "redistribute the preferred 
leaders" command becomes triggering the leader election process for all the 
replicas in the collection that are currently leaders but don't have the 
preferredLeader property set (and some other replica for that slice _does_). 
Essentially this is a "Hey you, stop being the leader now" call. The rest is 
automatic. I hope.
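
A minimal sketch of the idea, with stand-in types (ElectionContextStub and 
getReplicaProperty() are invented here for illustration; they are not the 
actual Solr API):

  class PreferredLeaderSketch {
    static class ElectionContextStub {
      java.util.Map<String, String> replicaProps;  // stand-in for ZK cluster state
    }

    static String getReplicaProperty(ElectionContextStub ctx, String name) {
      return ctx.replicaProps.get(name);  // real code would consult ZooKeeper
    }

    void joinElection(ElectionContextStub ctx, boolean replacement) {
      boolean preferred =
          "true".equals(getReplicaProperty(ctx, "preferredLeader"));
      // reuse the existing joinAtHead parameter to jump the queue
      joinElection(ctx, replacement, preferred);
    }

    void joinElection(ElectionContextStub ctx, boolean replacement, boolean joinAtHead) {
      // the existing election logic would run here
    }
  }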

Of course I'll have to throttle the re-election process; we don't want 50 
leaders being re-elected at once. How is TBD.

I've done some preliminary testing and this seems to fit the bill for my needs, 
I'll be working up a patch sometime Real Soon Now for your delectation unless 
anyone sees gaping holes in the approach.

One thing I did note in joinElection (about line 252 on trunk): the string we 
create the new leader sequence from is:
String firstInLine = nodes.get(1);

Seems like it should be 
String firstInLine = nodes.get(0);

Problem is, say I have the sequence numbers 
node1-0
node2-1

and I do the joinElection bit with joinAtHead=true (which I don't think we ever 
actually do). Then I wind up with
node1-0
node2-1
node3-1

I'll change it unless there's a good reason not to.


 CollectionsAPI call ELECTPREFERREDLEADERS
 -

 Key: SOLR-6517
 URL: https://issues.apache.org/jira/browse/SOLR-6517
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0, Trunk
Reporter: Erick Erickson
Assignee: Erick Erickson

 Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are 
 assigned, there has to be a command to "make it so, Mr. Solr". This is something 
 of a placeholder to collect ideas. One wouldn't want to flood the system with 
 hundreds of re-assignments at once. Should this be synchronous or async? 
 Should it make the best attempt but not worry about perfection? Should it???
 A collection=name parameter would be required, and it would re-elect all the 
 leaders that were on the 'wrong' node.
 I'm thinking of optionally allowing one to specify a shard in the case where 
 you wanted to make a very specific change. Note that there's no need to 
 specify a particular replica, since there should be only a single 
 preferredLeader per slice.
 This command would do nothing to any slice that did not have a replica with a 
 preferredLeader role. Likewise it would do nothing if the slice in question 
 already had the leader role assigned to the node with the preferredLeader 
 role.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Getting non-default tests to pass

2014-09-30 Thread Shawn Heisey
I'm running tests on branch_5x to see how things look for release, and
turning on additional tests (nightly, weekly, monster) ... a bunch of
Test2B tests are failing, apparently because the heap is too small. 
I've been trying to figure out how to make it larger, but haven't found it yet.

What options are required to make sure that the test JVMs have the
resources required for these tests to complete properly?

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5980) IW positions check not quite right

2014-09-30 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5980:
---

 Summary: IW positions check not quite right
 Key: LUCENE-5980
 URL: https://issues.apache.org/jira/browse/LUCENE-5980
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


I noticed this when working on LUCENE-5977. 

We only check that position doesn't overflow, not length. So a buggy analyzer 
can happily write a corrupt index (negative freq) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5980) IW positions check not quite right

2014-09-30 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5980:

Attachment: LUCENE-5980.patch

Patch with a simple check. The test is marked Nightly; it takes about 2 minutes.

 IW positions check not quite right
 --

 Key: LUCENE-5980
 URL: https://issues.apache.org/jira/browse/LUCENE-5980
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5980.patch


 I noticed this when working on LUCENE-5977. 
 We only check that position doesn't overflow, not length. So a buggy analyzer 
 can happily write a corrupt index (negative freq) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5980) IW positions check not quite right

2014-09-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5980?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154035#comment-14154035
 ] 

Michael McCandless commented on LUCENE-5980:


+1 LOL

 IW positions check not quite right
 --

 Key: LUCENE-5980
 URL: https://issues.apache.org/jira/browse/LUCENE-5980
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5980.patch


 I noticed this when working on LUCENE-5977. 
 We only check that position doesn't overflow, not length. So a buggy analyzer 
 can happily write a corrupt index (negative freq) 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5981) CheckIndex modifies index without write.lock

2014-09-30 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5981:
---

 Summary: CheckIndex modifies index without write.lock
 Key: LUCENE-5981
 URL: https://issues.apache.org/jira/browse/LUCENE-5981
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Instead it asks you nicely to not do that.

Due to the way this is implemented, if you choose to drop corrupt segments, it 
should obtain the lock before actually doing any reads too, or it might lose 
more than you think or do other strange stuff.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5924) Rename checkindex's -fix option

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154067#comment-14154067
 ] 

ASF subversion and git services commented on LUCENE-5924:
-

Commit 1628579 from [~rcmuir] in branch 'dev/trunk'
[ https://svn.apache.org/r1628579 ]

LUCENE-5924: rename CheckIndex -fix option and add more warnings about what it 
actually does

 Rename checkindex's -fix option
 ---

 Key: LUCENE-5924
 URL: https://issues.apache.org/jira/browse/LUCENE-5924
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5924.patch


 This option is dangerous. It sounds so good though that people are quick to 
 use it, but it definitely drops entire segments.
 The commandline flag should be something other than -fix (e.g. -exorcise).
 I don't agree with the current description of the option either. True, it does 
 have **WARNING** but I think it should read more like the scary options in 
 'man hdparm'.
 Like hdparm, we could fail if you provide it, with an even more ridiculous 
 warning, and make you run again with --yes-i-really-know-what-i-am-doing as 
 well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5924) Rename checkindex's -fix option

2014-09-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154091#comment-14154091
 ] 

ASF subversion and git services commented on LUCENE-5924:
-

Commit 1628582 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1628582 ]

LUCENE-5924: rename CheckIndex -fix option and add more warnings about what it 
actually does

 Rename checkindex's -fix option
 ---

 Key: LUCENE-5924
 URL: https://issues.apache.org/jira/browse/LUCENE-5924
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5924.patch


 This option is dangerous. It sounds so good though that people are quick to 
 use it, but it definitely drops entire segments.
 The commandline flag should be something other than -fix (e.g. -exorcise).
 I don't agree with the current description of the option either. True, it does 
 have **WARNING** but I think it should read more like the scary options in 
 'man hdparm'.
 Like hdparm, we could fail if you provide it, with an even more ridiculous 
 warning, and make you run again with --yes-i-really-know-what-i-am-doing as 
 well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-5924) Rename checkindex's -fix option

2014-09-30 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-5924.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

 Rename checkindex's -fix option
 ---

 Key: LUCENE-5924
 URL: https://issues.apache.org/jira/browse/LUCENE-5924
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5924.patch


 This option is dangerous. It sounds so good though that people are quick to 
 use it, but it definitely drops entire segments.
 The commandline flag should be something other than -fix (e.g. -exorcise).
 I don't agree with the current description of the option either. True, it does 
 have **WARNING** but I think it should read more like the scary options in 
 'man hdparm'.
 Like hdparm, we could fail if you provide it, with an even more ridiculous 
 warning, and make you run again with --yes-i-really-know-what-i-am-doing as 
 well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-6576) ModifiableSolrParams#add(SolrParams) is performing a set operation instead of an add

2014-09-30 Thread Steve Davids (JIRA)
Steve Davids created SOLR-6576:
--

 Summary: ModifiableSolrParams#add(SolrParams) is performing a set 
operation instead of an add
 Key: SOLR-6576
 URL: https://issues.apache.org/jira/browse/SOLR-6576
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 5.0, Trunk


Came across this bug by attempting to append multiple ModifiableSolrParams 
objects together, but found the last one was clobbering the previously set 
values. The add operation should append the values to the previously defined 
values, not perform a set operation.
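
A minimal reproduction of the behavior described above (a sketch; field and 
query values are illustrative):

  import org.apache.solr.common.params.ModifiableSolrParams;

  public class AddClobberRepro {
    public static void main(String[] args) {
      ModifiableSolrParams a = new ModifiableSolrParams();
      a.add("fq", "inStock:true");
      ModifiableSolrParams b = new ModifiableSolrParams();
      b.add("fq", "cat:books");
      a.add(b);
      // expected: ["inStock:true", "cat:books"]; observed: only ["cat:books"]
      System.out.println(java.util.Arrays.toString(a.getParams("fq")));
    }
  }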



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-6576) ModifiableSolrParams#add(SolrParams) is performing a set operation instead of an add

2014-09-30 Thread Steve Davids (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Davids updated SOLR-6576:
---
Attachment: SOLR-6576.patch

Fix + tests added in attached patch.

 ModifiableSolrParams#add(SolrParams) is performing a set operation instead of 
 an add
 

 Key: SOLR-6576
 URL: https://issues.apache.org/jira/browse/SOLR-6576
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 5.0, Trunk

 Attachments: SOLR-6576.patch


 Came across this bug by attempting to append multiple ModifiableSolrParams 
 objects together, but found the last one was clobbering the previously set 
 values. The add operation should append the values to the previously defined 
 values, not perform a set operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5981) CheckIndex modifies index without write.lock

2014-09-30 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-5981:

Attachment: LUCENE-5981.patch

Here's one patch:

* makes CheckIndex take boolean readOnly (disallows modifications if this is 
true, otherwise obtains write.lock).
* makes CheckIndex Closeable to release any lock.
* fixes CheckIndex main() to always actually call close() on the Directory.
* moves main() logic to doMain() so it's easier to test without it shutting 
down the JVM.
* adds a simple test.

It's a little complicated (yeah, the stupid readOnly param) because I thought it 
was overkill to require it to obtain write.lock in the typical case where you 
are not going to let it drop segments. But when you are, it's important to make 
sure nothing is changing stuff out from under you.
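
As a sketch of how the patched API might be driven (shapes inferred from the 
bullets above, so the exact signatures may differ from the final patch):

  import org.apache.lucene.index.CheckIndex;
  import org.apache.lucene.store.Directory;

  static void check(Directory dir) throws Exception {
    // readOnly=false per this patch: obtains write.lock for the duration
    try (CheckIndex checker = new CheckIndex(dir, false)) {
      CheckIndex.Status status = checker.checkIndex();
      if (!status.clean) {
        // with the lock held, dropping segments can't race a live writer
      }
    }
  }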

 CheckIndex modifies index without write.lock
 

 Key: LUCENE-5981
 URL: https://issues.apache.org/jira/browse/LUCENE-5981
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir
 Attachments: LUCENE-5981.patch


 Instead it asks you nicely to not do that.
 Due to the way this is implemented, if you choose to drop corrupt segments, 
 it should obtain the lock before actually doing any reads too, or it might 
 lose more than you think or do other strange stuff.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-5982) Tools in misc/ like index splitters need to obtain write.lock

2014-09-30 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-5982:
---

 Summary: Tools in misc/ like index splitters need to obtain 
write.lock
 Key: LUCENE-5982
 URL: https://issues.apache.org/jira/browse/LUCENE-5982
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Robert Muir


Just to prevent anyone using these tools from accidentally corrupting an index, 
we should obtain it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.7.0_67) - Build # 4346 - Failure!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4346/
Java: 32bit/jdk1.7.0_67 -client -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.TestDistributedGrouping.testDistribSearch

Error Message:
Request took too long during query expansion. Terminating request.

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Request 
took too long during query expansion. Terminating request.
at 
__randomizedtesting.SeedInfo.seed([419F3036047D5A97:C079BE2E73223AAB]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:512)
at 
org.apache.solr.TestDistributedGrouping.simpleQuery(TestDistributedGrouping.java:274)
at 
org.apache.solr.TestDistributedGrouping.doTest(TestDistributedGrouping.java:262)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:875)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11359 - Failure!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11359/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.testDistribSearch

Error Message:
no exception matching expected: 400: Request took too long during query 
expansion. Terminating request.

Stack Trace:
java.lang.AssertionError: no exception matching expected: 400: Request took too 
long during query expansion. Terminating request.
at 
__randomizedtesting.SeedInfo.seed([80C6DC45846A7D75:120525DF3351D49]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.assertFail(CloudExitableDirectoryReaderTest.java:101)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTimeoutTests(CloudExitableDirectoryReaderTest.java:75)
at 
org.apache.solr.cloud.CloudExitableDirectoryReaderTest.doTest(CloudExitableDirectoryReaderTest.java:54)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (SOLR-6576) ModifiableSolrParams#add(SolrParams) is performing a set operation instead of an add

2014-09-30 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154216#comment-14154216
 ] 

Hoss Man commented on SOLR-6576:


I haven't looked at this closely, but I think the behavior you are observing is 
intentional -- although it almost certainly needs better docs, and is probably 
not the best name.

I think the intent here was for ModifiableSolrParams.add(SolrParams) to be 
analogous to Map.putAll(Map).  It's not a shortcut for adding all of the 
_values_ associated with each key in the argument SolrParams object; it's a 
method for updating the current SolrParams object to have all of the _params_ 
(keys and values) in the argument SolrParams object.

Since the semantics are clearly ambiguous, and a change like this could really 
screw things up for existing users who expect the current behavior, the best 
course of action may be to deprecate the method completely and add new methods 
with clearer names?

(FWIW: your desired goal is exactly what SolrParams.wrapAppended(SolrParams, 
SolrParams) was designed for.)
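
For example, given two existing SolrParams objects (userParams and 
defaultParams are illustrative names):

  SolrParams merged = SolrParams.wrapAppended(userParams, defaultParams);
  // merged.getParams(key) yields the values from both objects, in order,
  // without copying or modifying either one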

 ModifiableSolrParams#add(SolrParams) is performing a set operation instead of 
 an add
 

 Key: SOLR-6576
 URL: https://issues.apache.org/jira/browse/SOLR-6576
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 5.0, Trunk

 Attachments: SOLR-6576.patch


 Came across this bug by attempting to append multiple ModifiableSolrParams 
 objects together, but found the last one was clobbering the previously set 
 values. The add operation should append the values to the previously defined 
 values, not perform a set operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6517) CollectionsAPI call ELECTPREFERREDLEADERS

2014-09-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154248#comment-14154248
 ] 

Noble Paul commented on SOLR-6517:
--

I am confused by the 'sliceUnique' feature.  What is the harm if I set two 
preferredLeaders? All that Solr needs to do is just choose any one that is 
available. Users would prefer this: they can choose 2 nodes, and even if one 
goes down the other can take up the role. This is how the overseer role is set; 
I can set as many nodes as overseers.

Another point I want to bring in is consistency in naming. We have something 
called "overseer role" for preferred overseers. We would need to make the 
naming consistent for these two features.

bq. I'm particularly interested in the joinAtHead parameter; it seems like it's 
exactly what I need...

The joinAtHead parameter is part of the overseer role feature. Please read 
through SOLR-6095 to learn how it works.

It is explained well here 
https://issues.apache.org/jira/browse/SOLR-6095?focusedCommentId=14032386page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14032386

 CollectionsAPI call ELECTPREFERREDLEADERS
 -

 Key: SOLR-6517
 URL: https://issues.apache.org/jira/browse/SOLR-6517
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0, Trunk
Reporter: Erick Erickson
Assignee: Erick Erickson

 Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are 
 assigned, there has to be a command to "make it so, Mr. Solr". This is something 
 of a placeholder to collect ideas. One wouldn't want to flood the system with 
 hundreds of re-assignments at once. Should this be synchronous or async? 
 Should it make the best attempt but not worry about perfection? Should it???
 A collection=name parameter would be required, and it would re-elect all the 
 leaders that were on the 'wrong' node.
 I'm thinking of optionally allowing one to specify a shard in the case where 
 you wanted to make a very specific change. Note that there's no need to 
 specify a particular replica, since there should be only a single 
 preferredLeader per slice.
 This command would do nothing to any slice that did not have a replica with a 
 preferredLeader role. Likewise it would do nothing if the slice in question 
 already had the leader role assigned to the node with the preferredLeader 
 role.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 637 - Still Failing

2014-09-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/637/

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
Resource in scope SUITE failed to close. Resource was registered from thread 
Thread[id=6658, name=qtp1091572250-6658, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest], registration stack trace below.

Stack Trace:
com.carrotsearch.randomizedtesting.ResourceDisposalError: Resource in scope 
SUITE failed to close. Resource was registered from thread Thread[id=6658, 
name=qtp1091572250-6658, state=RUNNABLE, 
group=TGRP-CollectionsAPIDistributedZkTest], registration stack trace below.
at java.lang.Thread.getStackTrace(Thread.java:1589)
at 
com.carrotsearch.randomizedtesting.RandomizedContext.closeAtEnd(RandomizedContext.java:166)
at 
org.apache.lucene.util.LuceneTestCase.closeAfterSuite(LuceneTestCase.java:688)
at 
org.apache.lucene.util.LuceneTestCase.wrapDirectory(LuceneTestCase.java:1231)
at 
org.apache.lucene.util.LuceneTestCase.newDirectory(LuceneTestCase.java:1122)
at 
org.apache.lucene.util.LuceneTestCase.newDirectory(LuceneTestCase.java:1114)
at 
org.apache.solr.core.MockDirectoryFactory.create(MockDirectoryFactory.java:47)
at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:350)
at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:275)
at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:487)
at org.apache.solr.core.SolrCore.init(SolrCore.java:793)
at org.apache.solr.core.SolrCore.init(SolrCore.java:651)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:491)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:466)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleCreateAction(CoreAdminHandler.java:575)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestInternal(CoreAdminHandler.java:199)
at 
org.apache.solr.handler.admin.CoreAdminHandler.handleRequestBody(CoreAdminHandler.java:188)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at 
org.apache.solr.servlet.SolrDispatchFilter.handleAdminRequest(SolrDispatchFilter.java:744)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:253)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:202)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:137)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:229)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.server.handler.GzipHandler.handle(GzipHandler.java:301)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1077)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.server.ssl.SslSocketConnector$SslConnectorEndPoint.run(SslSocketConnector.java:670)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.AssertionError: 

need help to change 4.6.1 build script

2014-09-30 Thread Yingwei Zhang
Hi - we are using Solr 4.6.1 and I want to make some changes and build my own
version of the library. The challenge is I only want to change Solr, not
Lucene. So I want to make a Solr version 4.6.1.yw, but it will still
depend on Lucene 4.6.1. Is there an easy way to split the version of
the Lucene dependency from the version of Solr itself?

Thanks!

Yingwei Zhang
@bloomreach


[JENKINS] Lucene-Solr-5.x-Linux (64bit/jdk1.8.0_20) - Build # 11207 - Failure!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11207/
Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
ERROR: SolrZkClient opens=20 closes=19

Stack Trace:
java.lang.AssertionError: ERROR: SolrZkClient opens=20 closes=19
at __randomizedtesting.SeedInfo.seed([9E60919BEA3814FF]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.SolrTestCaseJ4.endTrackingZkClients(SolrTestCaseJ4.java:455)
at org.apache.solr.SolrTestCaseJ4.afterClass(SolrTestCaseJ4.java:188)
at sun.reflect.GeneratedMethodAccessor27.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:790)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.CollectionsAPIDistributedZkTest

Error Message:
4 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest: 1) Thread[id=764, 
name=zkCallback-89-thread-2, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)2) Thread[id=629, 
name=zkCallback-89-thread-1, state=TIMED_WAITING, 
group=TGRP-CollectionsAPIDistributedZkTest] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
 at 
java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
 at 
java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941) 
at 
java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)   
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127) 
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
at java.lang.Thread.run(Thread.java:745)3) Thread[id=624, 

[jira] [Commented] (SOLR-6576) ModifiableSolrParams#add(SolrParams) is performing a set operation instead of an add

2014-09-30 Thread Steve Davids (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14154316#comment-14154316
 ] 

Steve Davids commented on SOLR-6576:


Yea, this is a bit misleading, as ModifiableSolrParams.add(String name, 
String... val) says:

bq. Add the given values to any existing name

The behavior of this particular method works as expected; I would likewise 
assume that the add for SolrParams would work just the same way. Otherwise it 
would be like having a map with two methods, put(K key, V value) and 
putAll(Map<? extends K, ? extends V> m), that did two completely different 
things.

So in my head I would think the method for the current functionality would 
mimic the set capability:

bq. Replace any existing parameter with the given name.

and should be named appropriately. Also, SolrParams.wrapAppended(SolrParams, 
SolrParams) is deprecated, so that isn't very reassuring to use :)

 ModifiableSolrParams#add(SolrParams) is performing a set operation instead of 
 an add
 

 Key: SOLR-6576
 URL: https://issues.apache.org/jira/browse/SOLR-6576
 Project: Solr
  Issue Type: Bug
Reporter: Steve Davids
 Fix For: 5.0, Trunk

 Attachments: SOLR-6576.patch


 Came across this bug by attempting to append multiple ModifiableSolrParams 
 objects together, but found the last one was clobbering the previously set 
 values. The add operation should append the values to the previously defined 
 values, not perform a set operation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: need help to change 4.6.1 build script

2014-09-30 Thread Shawn Heisey
On 9/30/2014 8:48 PM, Yingwei Zhang wrote:
 Hi - we are using Solr 4.6.1 and I want to make some changes and build my
 own version of the library. The challenge is I only want to change Solr,
 not Lucene. So I want to make a Solr version 4.6.1.yw, but it will still
 depend on Lucene 4.6.1. Is there an easy way to split the version of
 the Lucene dependency from the version of Solr itself?

The source code for Solr 4.6.1 also includes all of the source code for
Lucene 4.6.1.  If you only change the Solr source code, then the Lucene
jars that get built will be functionally identical to standard Lucene
4.6.1.  They would have slightly different version numbers from the
official release, but they could be swapped with the officially released
jars with no problem.
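
(For example -- an assumption about the 4.6.1 build layout, so double-check 
against your checkout -- running something like

  cd solr
  ant dist

from the top of the source tree should build the Solr artifacts while 
compiling the Lucene modules from the same tree.)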

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_40-ea-b04) - Build # 11360 - Still Failing!

2014-09-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/11360/
Java: 64bit/jdk1.8.0_40-ea-b04 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery

Error Message:


Stack Trace:
java.lang.AssertionError: 
    at __randomizedtesting.SeedInfo.seed([51BE64D16A3B3F99:E2E950930EE6DB1C]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.apache.solr.SolrTestCaseJ4.assertQEx(SolrTestCaseJ4.java:850)
    at org.apache.solr.core.ExitableDirectoryReaderTest.testPrefixQuery(ExitableDirectoryReaderTest.java:61)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:43)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
    at java.lang.Thread.run(Thread.java:745)




Build Log:
[...truncated 11877 lines...]
   [junit4] Suite: org.apache.solr.core.ExitableDirectoryReaderTest
   [junit4]   2> Creating 

[jira] [Commented] (SOLR-6517) CollectionsAPI call ELECTPREFERREDLEADERS

2014-09-30 Thread Erick Erickson (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14154339#comment-14154339 ]

Erick Erickson commented on SOLR-6517:
--

Hmmm, the discussion at SOLR-6095 will stand me in good stead, thanks. 

Let's take this a bit at a time.

bq: I am confused by the 'sliceUnique' feature. What is the harm if I set two 
preferredLeaders? 

See the discussion at SOLR-6491 about why sysadmins need to impose some order. 
This is not an absolute thing; IOW, if the preferred leader isn't electable, 
we'll fall back to the current election process. What we're talking about here 
is shard leadership and relieving hotspots that are a consequence of the 
current algorithm for electing shard leaders. Note that in tandem is SOLR-6513, 
which automatically assigns exactly one sliceUnique role per slice across all 
the physical nodes that host that collection. preferredLeader is a property 
that a priori has this restriction. Other properties can be sliceUnique as well 
(or not; for properties other than preferredLeader this is up to the user).

bq: All that Solr needs to do is just choose any one that is available. Users 
would prefer this.

I can put you in touch with users of large, complex installations who will 
explicitly disagree; see the discussion at SOLR-6491. Again, there's no 
requirement that any installation do anything at all with the preferred leader 
role, in which case Solr's behavior will be unchanged. The user has to 
either 1) assign the preferredLeader property or 2) use the (new) command to 
balance a property, one per shard, across all the nodes in a collection (see 
SOLR-6513).
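
For illustration only (these are hedged sketches, since the exact API was still 
being hammered out in SOLR-6491/SOLR-6513 at the time, and the host, collection, 
shard, and replica names below are placeholders), the two options map to 
Collections API requests roughly like:

  $ curl 'http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&collection=coll1&shard=shard1&replica=core_node1&property=preferredLeader&property.value=true'

  $ curl 'http://localhost:8983/solr/admin/collections?action=BALANCESHARDUNIQUE&collection=coll1&property=preferredLeader'

The first pins the preferredLeader property on one specific replica; the second 
spreads exactly one preferredLeader per slice across the nodes hosting the 
collection.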

bq: They can choose 2 nodes and even if one goes down the other can take up 
the role. This is how overseer roles are set. I can set as many nodes as 
overseers.

I'm totally unclear here. There can be only one leader per slice, and there's 
nothing about the special preferredLeader property that prevents this; if the 
preferredLeader is unavailable, the current election process is followed.

bq: The joinAtHead is a part of the overseer role feature.

Either I'm hallucinating or it's _also_ part of shard leadership election. I 
confess I hadn't tracked down why I was hitting LeaderElector.joinElection 
twice when bringing up a replica; now I know why: once for adding an ephemeral 
node for overseer election, and again for adding an ephemeral node for 
shard-leader election. _Very_ good to know so I don't screw up the overseer 
election! The discussion you pointed to looks like it's the model I'll look 
into for this bit as well.


 CollectionsAPI call ELECTPREFERREDLEADERS
 -

 Key: SOLR-6517
 URL: https://issues.apache.org/jira/browse/SOLR-6517
 Project: Solr
  Issue Type: New Feature
Affects Versions: 5.0, Trunk
Reporter: Erick Erickson
Assignee: Erick Erickson

 Perhaps the final piece of SOLR-6491. Once the preferred leadership roles are 
 assigned, there has to be a command to "make it so, Mr. Solr". This is 
 something of a placeholder to collect ideas. One wouldn't want to flood the 
 system with hundreds of re-assignments at once. Should this be synchronous or 
 async? Should it make the best attempt but not worry about perfection? 
 Should it… 
 A collection=name parameter would be required, and it would re-elect all the 
 leaders that were on the 'wrong' node. 
 I'm thinking of optionally allowing one to specify a shard in the case where 
 you want to make a very specific change. Note that there's no need to 
 specify a particular replica, since there should be only a single 
 preferredLeader per slice.
 This command would do nothing to any slice that did not have a replica with a 
 preferredLeader role. Likewise it would do nothing if the slice in question 
 already had the leader role assigned to the node with the preferredLeader 
 role.
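
Purely as a sketch of the placeholder above (the command did not exist yet when 
this was filed, so the action name and parameters here are illustrative, not a 
real API), the proposed call might look like:

  /admin/collections?action=ELECTPREFERREDLEADERS&collection=collection1
  /admin/collections?action=ELECTPREFERREDLEADERS&collection=collection1&shard=shard2

where the optional shard parameter would narrow the re-election to a single 
slice.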



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org